Test Report: Docker_Windows 18259

540f885a6d6e66248f116de2dd0a4210cbfa2dfa:2024-02-29:33352

Failed tests (12/321)

TestErrorSpam/setup (65.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-059300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-059300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 --driver=docker: (1m5.6384818s)
error_spam_test.go:96: unexpected stderr: "W0229 17:49:08.210967   12204 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-059300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
- MINIKUBE_LOCATION=18259
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-059300 in cluster nospam-059300
* Pulling base image v0.0.42-1708944392-18244 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-059300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0229 17:49:08.210967   12204 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (65.64s)
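Note: the cluster itself started cleanly; the test failed only on the single unexpected stderr line above, where the Docker CLI on the Jenkins host could not resolve its "default" context because that context's meta.json is missing. TestErrorSpam fails on any stderr line outside its allow-list. Below is a minimal Go sketch of that kind of gate, with hypothetical names; it is not minikube's actual error_spam_test.go code.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// allowedStderr is a stand-in allow-list (the real test maintains its own):
// every stderr line must match at least one pattern or the test fails.
var allowedStderr = []*regexp.Regexp{
	regexp.MustCompile(`^\s*$`), // blank lines are acceptable
}

// unexpectedStderr returns the stderr lines that match no allowed pattern.
func unexpectedStderr(stderr string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		allowed := false
		for _, re := range allowedStderr {
			if re.MatchString(line) {
				allowed = true
				break
			}
		}
		if !allowed {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := `W0229 17:49:08.210967   12204 main.go:291] Unable to resolve the current Docker CLI context "default": context not found`
	for _, line := range unexpectedStderr(stderr) {
		fmt.Printf("unexpected stderr: %q\n", line) // same shape as the error_spam_test.go:96 report above
	}
}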

TestFunctional/serial/MinikubeKubectlCmdDirectly (6.35s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
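This failure looks environmental rather than functional: the test hard-links the freshly built minikube binary to out\kubectl.exe, and os.Link has no overwrite mode, so a kubectl.exe left behind by an earlier run fails on Windows with the error above. A minimal sketch of a remove-then-link guard that would sidestep it (hypothetical helper, not the test's actual code):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// linkFresh hard-links src to dst, first removing any leftover dst from a
// previous run; on Windows a pre-existing destination makes os.Link fail
// with "Cannot create a file when that file already exists."
func linkFresh(src, dst string) error {
	if err := os.Remove(dst); err != nil && !errors.Is(err, fs.ErrNotExist) {
		return fmt.Errorf("removing stale %s: %w", dst, err)
	}
	return os.Link(src, dst)
}

func main() {
	if err := linkFresh(`out/minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
		fmt.Println(err)
	}
}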
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-686300
helpers_test.go:235: (dbg) docker inspect functional-686300:
-- stdout --
	[
	    {
	        "Id": "8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a",
	        "Created": "2024-02-29T17:51:28.93478312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 24154,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T17:51:29.584885548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/hosts",
	        "LogPath": "/var/lib/docker/containers/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a-json.log",
	        "Name": "/functional-686300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-686300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-686300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bd1feff9721a216f61a4a0f299ae1663aa179c4b80a925fa2d9a30f4bca4ddcc-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd1feff9721a216f61a4a0f299ae1663aa179c4b80a925fa2d9a30f4bca4ddcc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd1feff9721a216f61a4a0f299ae1663aa179c4b80a925fa2d9a30f4bca4ddcc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd1feff9721a216f61a4a0f299ae1663aa179c4b80a925fa2d9a30f4bca4ddcc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-686300",
	                "Source": "/var/lib/docker/volumes/functional-686300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-686300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-686300",
	                "name.minikube.sigs.k8s.io": "functional-686300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ec90ee67bf3cb5f5602e5e55dfd691cd784808084baef1b37888389be40d09f",
	            "SandboxKey": "/var/run/docker/netns/3ec90ee67bf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57190"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57191"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57187"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57188"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57189"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-686300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8b432bb7d7db",
	                        "functional-686300"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "99bf6af32ca3a66c163c66474816a4e31b34bcdd256dda1d7ed0cbe8b5a7c5bc",
	                    "EndpointID": "5d704a070fec57d5d3eb31f5e202b9aee8babaaec92a1231bd11dd4b97d4d22c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-686300",
	                        "8b432bb7d7db"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-686300 -n functional-686300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-686300 -n functional-686300: (1.296855s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 logs -n 25: (2.4019953s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-059300 --log_dir                                     | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	|         | C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-059300                                            | nospam-059300     | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:50 UTC |
	| start   | -p functional-686300                                        | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:50 UTC | 29 Feb 24 17:52 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-686300                                        | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:52 UTC | 29 Feb 24 17:53 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-686300 cache add                                 | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-686300 cache add                                 | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-686300 cache add                                 | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-686300 cache add                                 | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | minikube-local-cache-test:functional-686300                 |                   |                   |         |                     |                     |
	| cache   | functional-686300 cache delete                              | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | minikube-local-cache-test:functional-686300                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	| ssh     | functional-686300 ssh sudo                                  | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-686300                                           | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-686300 ssh                                       | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-686300 cache reload                              | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	| ssh     | functional-686300 ssh                                       | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-686300 kubectl --                                | functional-686300 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:53 UTC | 29 Feb 24 17:53 UTC |
	|         | --context functional-686300                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:52:34
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:52:34.270356   11452 out.go:291] Setting OutFile to fd 772 ...
	I0229 17:52:34.271182   11452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:52:34.271182   11452 out.go:304] Setting ErrFile to fd 744...
	I0229 17:52:34.271182   11452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:52:34.292981   11452 out.go:298] Setting JSON to false
	I0229 17:52:34.294974   11452 start.go:129] hostinfo: {"hostname":"minikube7","uptime":7114,"bootTime":1709222039,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 17:52:34.294974   11452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:52:34.299491   11452 out.go:177] * [functional-686300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:52:34.306117   11452 notify.go:220] Checking for updates...
	I0229 17:52:34.310744   11452 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:52:34.313213   11452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:52:34.316075   11452 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 17:52:34.319226   11452 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:52:34.322096   11452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:52:34.323388   11452 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:52:34.323388   11452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:52:34.590677   11452 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 17:52:34.600019   11452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:52:34.960590   11452 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:82 SystemTime:2024-02-29 17:52:34.90805648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 17:52:34.965051   11452 out.go:177] * Using the docker driver based on existing profile
	I0229 17:52:34.967034   11452 start.go:299] selected driver: docker
	I0229 17:52:34.967034   11452 start.go:903] validating driver "docker" against &{Name:functional-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-686300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:52:34.967613   11452 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:52:34.979511   11452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:52:35.362531   11452 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:82 SystemTime:2024-02-29 17:52:35.313905216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 17:52:35.463408   11452 cni.go:84] Creating CNI manager for ""
	I0229 17:52:35.463408   11452 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:52:35.463408   11452 start_flags.go:323] config:
	{Name:functional-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-686300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:52:35.467903   11452 out.go:177] * Starting control plane node functional-686300 in cluster functional-686300
	I0229 17:52:35.471342   11452 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 17:52:35.474435   11452 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 17:52:35.475748   11452 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:52:35.477368   11452 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 17:52:35.477458   11452 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:52:35.477458   11452 cache.go:56] Caching tarball of preloaded images
	I0229 17:52:35.477458   11452 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 17:52:35.478130   11452 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 17:52:35.478357   11452 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\config.json ...
	I0229 17:52:35.636932   11452 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 17:52:35.636932   11452 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 17:52:35.636932   11452 cache.go:194] Successfully downloaded all kic artifacts
	I0229 17:52:35.637461   11452 start.go:365] acquiring machines lock for functional-686300: {Name:mkcf195a253c18f584996512d0ad0e6c4dd7e316 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:52:35.637672   11452 start.go:369] acquired machines lock for "functional-686300" in 98µs
	I0229 17:52:35.637874   11452 start.go:96] Skipping create...Using existing machine configuration
	I0229 17:52:35.637874   11452 fix.go:54] fixHost starting: 
	I0229 17:52:35.654477   11452 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
	I0229 17:52:35.819384   11452 fix.go:102] recreateIfNeeded on functional-686300: state=Running err=<nil>
	W0229 17:52:35.819454   11452 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 17:52:35.823467   11452 out.go:177] * Updating the running docker "functional-686300" container ...
	I0229 17:52:35.825828   11452 machine.go:88] provisioning docker machine ...
	I0229 17:52:35.825828   11452 ubuntu.go:169] provisioning hostname "functional-686300"
	I0229 17:52:35.835901   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:35.997965   11452 main.go:141] libmachine: Using SSH client type: native
	I0229 17:52:35.998076   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57190 <nil> <nil>}
	I0229 17:52:35.998076   11452 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-686300 && echo "functional-686300" | sudo tee /etc/hostname
	I0229 17:52:36.192169   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-686300
	
	I0229 17:52:36.204282   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:36.382425   11452 main.go:141] libmachine: Using SSH client type: native
	I0229 17:52:36.382454   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57190 <nil> <nil>}
	I0229 17:52:36.382454   11452 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-686300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-686300/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-686300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 17:52:36.553335   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:52:36.553572   11452 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 17:52:36.553631   11452 ubuntu.go:177] setting up certificates
	I0229 17:52:36.553631   11452 provision.go:83] configureAuth start
	I0229 17:52:36.563000   11452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-686300
	I0229 17:52:36.736788   11452 provision.go:138] copyHostCerts
	I0229 17:52:36.736788   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0229 17:52:36.737323   11452 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 17:52:36.737471   11452 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 17:52:36.737945   11452 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 17:52:36.738723   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0229 17:52:36.738723   11452 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 17:52:36.738723   11452 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 17:52:36.739438   11452 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 17:52:36.740231   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0229 17:52:36.740231   11452 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 17:52:36.740764   11452 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 17:52:36.740996   11452 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 17:52:36.740996   11452 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-686300 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-686300]
	I0229 17:52:37.101306   11452 provision.go:172] copyRemoteCerts
	I0229 17:52:37.117660   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:52:37.132438   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:37.301598   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:52:37.421702   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 17:52:37.422420   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 17:52:37.459075   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 17:52:37.459075   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 17:52:37.499635   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 17:52:37.500320   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 17:52:37.537116   11452 provision.go:86] duration metric: configureAuth took 983.4776ms
	I0229 17:52:37.537116   11452 ubuntu.go:193] setting minikube options for container-runtime
	I0229 17:52:37.538157   11452 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:52:37.548642   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:37.729609   11452 main.go:141] libmachine: Using SSH client type: native
	I0229 17:52:37.730343   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57190 <nil> <nil>}
	I0229 17:52:37.730343   11452 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 17:52:37.905351   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 17:52:37.905351   11452 ubuntu.go:71] root file system type: overlay
	I0229 17:52:37.905989   11452 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 17:52:37.916210   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:38.081027   11452 main.go:141] libmachine: Using SSH client type: native
	I0229 17:52:38.081804   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57190 <nil> <nil>}
	I0229 17:52:38.081804   11452 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 17:52:38.272554   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 17:52:38.282401   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:38.446750   11452 main.go:141] libmachine: Using SSH client type: native
	I0229 17:52:38.447435   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57190 <nil> <nil>}
	I0229 17:52:38.447435   11452 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 17:52:38.620962   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:52:38.620962   11452 machine.go:91] provisioned docker machine in 2.7951132s
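	The `diff -u ... || { mv ...; systemctl ... }` one-liner above is an idempotent swap: docker.service is only replaced, and the daemon only reloaded and restarted, when the freshly rendered unit actually differs from what is on disk. A minimal Go sketch of the same pattern (paths and the restart sequence are taken from the log; removing an unchanged .new file is an assumption, not minikube's code):

	package main

	import (
		"os"
		"os/exec"
	)

	// swapIfChanged mirrors the shell one-liner: keep the live unit when the
	// rendered one is identical, otherwise move it into place and restart.
	func swapIfChanged(current, next string) error {
		// diff -u exits 0 when the files match; nothing to do but tidy up.
		if exec.Command("diff", "-u", current, next).Run() == nil {
			return os.Remove(next) // assumption: drop the unused .new file
		}
		if err := os.Rename(next, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "docker"},
		} {
			if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := swapIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"); err != nil {
			os.Exit(1)
		}
	}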
	I0229 17:52:38.620962   11452 start.go:300] post-start starting for "functional-686300" (driver="docker")
	I0229 17:52:38.620962   11452 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 17:52:38.633442   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 17:52:38.639969   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:38.819192   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:52:38.961011   11452 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 17:52:38.971941   11452 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0229 17:52:38.971941   11452 command_runner.go:130] > NAME="Ubuntu"
	I0229 17:52:38.971941   11452 command_runner.go:130] > VERSION_ID="22.04"
	I0229 17:52:38.971941   11452 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0229 17:52:38.971941   11452 command_runner.go:130] > VERSION_CODENAME=jammy
	I0229 17:52:38.971941   11452 command_runner.go:130] > ID=ubuntu
	I0229 17:52:38.971941   11452 command_runner.go:130] > ID_LIKE=debian
	I0229 17:52:38.971941   11452 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0229 17:52:38.971941   11452 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0229 17:52:38.971941   11452 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0229 17:52:38.971941   11452 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0229 17:52:38.971941   11452 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0229 17:52:38.972491   11452 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 17:52:38.972605   11452 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 17:52:38.972645   11452 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 17:52:38.972645   11452 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 17:52:38.972645   11452 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 17:52:38.972645   11452 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 17:52:38.973983   11452 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 17:52:38.973983   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> /etc/ssl/certs/56602.pem
	I0229 17:52:38.974847   11452 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\5660\hosts -> hosts in /etc/test/nested/copy/5660
	I0229 17:52:38.974847   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\5660\hosts -> /etc/test/nested/copy/5660/hosts
	I0229 17:52:38.987707   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5660
	I0229 17:52:39.004840   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 17:52:39.050176   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\5660\hosts --> /etc/test/nested/copy/5660/hosts (40 bytes)
	I0229 17:52:39.091965   11452 start.go:303] post-start completed in 471.0003ms
	I0229 17:52:39.103886   11452 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 17:52:39.112383   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:39.277157   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:52:39.383355   11452 command_runner.go:130] > 1%
	I0229 17:52:39.395935   11452 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 17:52:39.408029   11452 command_runner.go:130] > 952G
	I0229 17:52:39.408029   11452 fix.go:56] fixHost completed within 3.7701276s
	I0229 17:52:39.408548   11452 start.go:83] releasing machines lock for "functional-686300", held for 3.7703297s
	I0229 17:52:39.416789   11452 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-686300
	I0229 17:52:39.583019   11452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 17:52:39.593424   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:39.594578   11452 ssh_runner.go:195] Run: cat /version.json
	I0229 17:52:39.599922   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:39.784665   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:52:39.802860   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:52:39.913471   11452 command_runner.go:130] > {"iso_version": "v1.32.1-1708020063-17936", "kicbase_version": "v0.0.42-1708944392-18244", "minikube_version": "v1.32.0", "commit": "720d09bdc48bd298443860ffedca89b22332df12"}
	I0229 17:52:39.926814   11452 ssh_runner.go:195] Run: systemctl --version
	I0229 17:52:40.064232   11452 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 17:52:40.067375   11452 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0229 17:52:40.067375   11452 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0229 17:52:40.078524   11452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 17:52:40.089584   11452 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0229 17:52:40.089584   11452 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0229 17:52:40.089584   11452 command_runner.go:130] > Device: d9h/217d	Inode: 215         Links: 1
	I0229 17:52:40.089584   11452 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 17:52:40.089584   11452 command_runner.go:130] > Access: 2024-02-29 17:40:04.246299412 +0000
	I0229 17:52:40.089584   11452 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0229 17:52:40.089584   11452 command_runner.go:130] > Change: 2024-02-29 17:39:33.093074733 +0000
	I0229 17:52:40.089584   11452 command_runner.go:130] >  Birth: 2024-02-29 17:39:33.093074733 +0000
	I0229 17:52:40.101815   11452 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0229 17:52:40.121010   11452 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0229 17:52:40.122377   11452 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0229 17:52:40.134837   11452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 17:52:40.156512   11452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 17:52:40.156512   11452 start.go:475] detecting cgroup driver to use...
	I0229 17:52:40.156512   11452 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 17:52:40.157033   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:52:40.220589   11452 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
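	The two `printf ... | sudo tee /etc/crictl.yaml` steps in this log point crictl at whichever runtime socket is currently active. A minimal sketch of the same write, assuming root and the containerd endpoint shown above:

	package main

	import "os"

	func main() {
		// Same content the printf|tee pipeline writes; 0644 gives a
		// root-owned config file that crictl can read.
		conf := "runtime-endpoint: unix:///run/containerd/containerd.sock\n"
		if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644); err != nil {
			panic(err)
		}
	}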
	I0229 17:52:40.233547   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 17:52:40.325897   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 17:52:40.347867   11452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 17:52:40.358988   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 17:52:40.391894   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:52:40.422272   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 17:52:40.456241   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:52:40.490827   11452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 17:52:40.524810   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 17:52:40.558530   11452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 17:52:40.577673   11452 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 17:52:40.589917   11452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 17:52:40.620848   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:52:40.776494   11452 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 17:52:51.066787   11452 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.2901155s)
	I0229 17:52:51.066870   11452 start.go:475] detecting cgroup driver to use...
	I0229 17:52:51.066870   11452 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 17:52:51.080242   11452 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 17:52:51.107193   11452 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0229 17:52:51.107193   11452 command_runner.go:130] > [Unit]
	I0229 17:52:51.107193   11452 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 17:52:51.107193   11452 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 17:52:51.107193   11452 command_runner.go:130] > BindsTo=containerd.service
	I0229 17:52:51.107193   11452 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0229 17:52:51.107193   11452 command_runner.go:130] > Wants=network-online.target
	I0229 17:52:51.107193   11452 command_runner.go:130] > Requires=docker.socket
	I0229 17:52:51.107193   11452 command_runner.go:130] > StartLimitBurst=3
	I0229 17:52:51.107193   11452 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 17:52:51.107193   11452 command_runner.go:130] > [Service]
	I0229 17:52:51.107193   11452 command_runner.go:130] > Type=notify
	I0229 17:52:51.107193   11452 command_runner.go:130] > Restart=on-failure
	I0229 17:52:51.107193   11452 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 17:52:51.107193   11452 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 17:52:51.107193   11452 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 17:52:51.107193   11452 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 17:52:51.107193   11452 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 17:52:51.107193   11452 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 17:52:51.107193   11452 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 17:52:51.107193   11452 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 17:52:51.107845   11452 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 17:52:51.107845   11452 command_runner.go:130] > ExecStart=
	I0229 17:52:51.107845   11452 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0229 17:52:51.107845   11452 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 17:52:51.107845   11452 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 17:52:51.107845   11452 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 17:52:51.107845   11452 command_runner.go:130] > LimitNOFILE=infinity
	I0229 17:52:51.107845   11452 command_runner.go:130] > LimitNPROC=infinity
	I0229 17:52:51.107845   11452 command_runner.go:130] > LimitCORE=infinity
	I0229 17:52:51.107845   11452 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 17:52:51.107845   11452 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 17:52:51.108114   11452 command_runner.go:130] > TasksMax=infinity
	I0229 17:52:51.108114   11452 command_runner.go:130] > TimeoutStartSec=0
	I0229 17:52:51.108147   11452 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 17:52:51.108147   11452 command_runner.go:130] > Delegate=yes
	I0229 17:52:51.108147   11452 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 17:52:51.108184   11452 command_runner.go:130] > KillMode=process
	I0229 17:52:51.108184   11452 command_runner.go:130] > [Install]
	I0229 17:52:51.108216   11452 command_runner.go:130] > WantedBy=multi-user.target
	I0229 17:52:51.108216   11452 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 17:52:51.121026   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 17:52:51.143098   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:52:51.173572   11452 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 17:52:51.185899   11452 ssh_runner.go:195] Run: which cri-dockerd
	I0229 17:52:51.198355   11452 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 17:52:51.211006   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 17:52:51.230743   11452 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 17:52:51.274550   11452 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 17:52:51.460595   11452 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 17:52:51.613492   11452 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 17:52:51.613621   11452 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 17:52:51.657357   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:52:51.828154   11452 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 17:52:52.502108   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 17:52:52.536622   11452 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0229 17:52:52.578800   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 17:52:52.611057   11452 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 17:52:52.752443   11452 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 17:52:52.936705   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:52:53.084326   11452 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 17:52:53.123745   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 17:52:53.164011   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:52:53.362415   11452 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 17:52:53.493303   11452 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 17:52:53.505581   11452 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 17:52:53.517579   11452 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 17:52:53.517579   11452 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 17:52:53.517579   11452 command_runner.go:130] > Device: e2h/226d	Inode: 647         Links: 1
	I0229 17:52:53.517649   11452 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0229 17:52:53.517649   11452 command_runner.go:130] > Access: 2024-02-29 17:52:53.377081736 +0000
	I0229 17:52:53.517649   11452 command_runner.go:130] > Modify: 2024-02-29 17:52:53.377081736 +0000
	I0229 17:52:53.517649   11452 command_runner.go:130] > Change: 2024-02-29 17:52:53.377081736 +0000
	I0229 17:52:53.517649   11452 command_runner.go:130] >  Birth: -
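	The stat above is one probe in the "Will wait 60s for socket path" loop. A sketch of such a wait, polling with os.Stat until the cri-dockerd socket appears; the 500ms interval is an assumption, not minikube's actual backoff:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists as a unix socket or the
	// deadline passes, like the 60s wait logged above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}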
	I0229 17:52:53.517701   11452 start.go:543] Will wait 60s for crictl version
	I0229 17:52:53.527997   11452 ssh_runner.go:195] Run: which crictl
	I0229 17:52:53.534484   11452 command_runner.go:130] > /usr/bin/crictl
	I0229 17:52:53.547773   11452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 17:52:53.630433   11452 command_runner.go:130] > Version:  0.1.0
	I0229 17:52:53.630532   11452 command_runner.go:130] > RuntimeName:  docker
	I0229 17:52:53.630532   11452 command_runner.go:130] > RuntimeVersion:  25.0.3
	I0229 17:52:53.630532   11452 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 17:52:53.630532   11452 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0229 17:52:53.640707   11452 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 17:52:53.684886   11452 command_runner.go:130] > 25.0.3
	I0229 17:52:53.693014   11452 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 17:52:53.733309   11452 command_runner.go:130] > 25.0.3
	I0229 17:52:53.736800   11452 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0229 17:52:53.743571   11452 cli_runner.go:164] Run: docker exec -t functional-686300 dig +short host.docker.internal
	I0229 17:52:54.001436   11452 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 17:52:54.013162   11452 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 17:52:54.025289   11452 command_runner.go:130] > 192.168.65.254	host.minikube.internal
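	The grep above only checks that host.minikube.internal is already mapped; when the entry is missing, minikube appends it. A sketch of that check-then-append, with the entry taken from the log (doing it natively rather than via grep/echo is the only liberty taken):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry appends entry to /etc/hosts unless it is already there.
	func ensureHostsEntry(entry string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		if strings.Contains(string(data), entry) {
			return nil // already present, like the grep hit in the log
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = f.WriteString(entry + "\n")
		return err
	}

	func main() {
		if err := ensureHostsEntry("192.168.65.254\thost.minikube.internal"); err != nil {
			os.Exit(1)
		}
	}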
	I0229 17:52:54.033892   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:54.188603   11452 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:52:54.197432   11452 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 17:52:54.237702   11452 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 17:52:54.237806   11452 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:52:54.237868   11452 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 17:52:54.237933   11452 docker.go:615] Images already preloaded, skipping extraction
	I0229 17:52:54.246402   11452 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 17:52:54.282383   11452 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 17:52:54.283298   11452 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 17:52:54.283399   11452 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 17:52:54.283399   11452 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 17:52:54.283426   11452 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 17:52:54.283426   11452 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 17:52:54.283426   11452 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 17:52:54.283426   11452 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:52:54.283501   11452 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 17:52:54.283501   11452 cache_images.go:84] Images are preloaded, skipping loading
	I0229 17:52:54.292966   11452 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 17:52:54.383215   11452 command_runner.go:130] > cgroupfs
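	`docker info --format {{.CgroupDriver}}` is how the cgroup driver is probed before the CNI and kubeadm decisions below. The same query as a sketch, run locally instead of over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask dockerd for its cgroup driver via a Go template.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "cgroupfs"
	}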
	I0229 17:52:54.383305   11452 cni.go:84] Creating CNI manager for ""
	I0229 17:52:54.383513   11452 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:52:54.383513   11452 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 17:52:54.383513   11452 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-686300 NodeName:functional-686300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 17:52:54.383815   11452 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-686300"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 17:52:54.383955   11452 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-686300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-686300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
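	The kubeadm config above is rendered from the option struct logged at kubeadm.go:176. As an illustration of the mechanism only (the template fragment below is trimmed and hypothetical, not minikube's real template), a text/template fed with the node name, IP, and port from this run:

	package main

	import (
		"os"
		"text/template"
	)

	// frag is a hypothetical, trimmed-down InitConfiguration template.
	const frag = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		err := t.Execute(os.Stdout, struct {
			NodeIP   string
			Port     int
			NodeName string
		}{"192.168.49.2", 8441, "functional-686300"})
		if err != nil {
			panic(err)
		}
	}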
	I0229 17:52:54.394583   11452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 17:52:54.414122   11452 command_runner.go:130] > kubeadm
	I0229 17:52:54.414122   11452 command_runner.go:130] > kubectl
	I0229 17:52:54.414122   11452 command_runner.go:130] > kubelet
	I0229 17:52:54.414122   11452 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 17:52:54.424745   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 17:52:54.442271   11452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 17:52:54.470185   11452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 17:52:54.498179   11452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0229 17:52:54.538480   11452 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 17:52:54.548285   11452 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0229 17:52:54.548285   11452 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300 for IP: 192.168.49.2
	I0229 17:52:54.548285   11452 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:52:54.549376   11452 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 17:52:54.549630   11452 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 17:52:54.550481   11452 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.key
	I0229 17:52:54.550481   11452 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\apiserver.key.dd3b5fb2
	I0229 17:52:54.551251   11452 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\proxy-client.key
	I0229 17:52:54.551251   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 17:52:54.551251   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 17:52:54.551251   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 17:52:54.551251   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 17:52:54.551812   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 17:52:54.551812   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 17:52:54.551812   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 17:52:54.551812   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 17:52:54.552731   11452 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 17:52:54.553254   11452 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 17:52:54.553337   11452 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 17:52:54.553337   11452 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 17:52:54.553872   11452 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 17:52:54.553975   11452 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 17:52:54.554608   11452 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 17:52:54.554608   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> /usr/share/ca-certificates/56602.pem
	I0229 17:52:54.554608   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:52:54.555320   11452 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem -> /usr/share/ca-certificates/5660.pem
	I0229 17:52:54.556914   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 17:52:54.598582   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 17:52:54.636692   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 17:52:54.674395   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 17:52:54.714266   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 17:52:54.754365   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 17:52:54.795617   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 17:52:54.832692   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 17:52:54.873953   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 17:52:54.916261   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 17:52:54.958291   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 17:52:55.015741   11452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 17:52:55.060017   11452 ssh_runner.go:195] Run: openssl version
	I0229 17:52:55.073601   11452 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0229 17:52:55.085305   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 17:52:55.117908   11452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 17:52:55.134382   11452 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 17:52:55.135271   11452 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 17:52:55.145494   11452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 17:52:55.166418   11452 command_runner.go:130] > 3ec20f2e
	I0229 17:52:55.176997   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 17:52:55.206652   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 17:52:55.240480   11452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:52:55.255207   11452 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:52:55.255207   11452 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:52:55.266387   11452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:52:55.282562   11452 command_runner.go:130] > b5213941
	I0229 17:52:55.294263   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 17:52:55.324069   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 17:52:55.354963   11452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 17:52:55.369200   11452 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 17:52:55.369236   11452 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 17:52:55.380793   11452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 17:52:55.394019   11452 command_runner.go:130] > 51391683
	I0229 17:52:55.402835   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
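	Each `openssl x509 -hash -noout` call above prints the certificate's subject hash, and the `ln -fs` that follows creates the `<hash>.0` symlink that OpenSSL's CA lookup expects under /etc/ssl/certs. A sketch of the pair; shelling out to openssl for the hash is a shortcut taken here, since computing it natively would require OpenSSL's canonical subject encoding:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert computes the subject hash of pemPath and symlinks
	// /etc/ssl/certs/<hash>.0 to it, with ln -fs semantics.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			os.Exit(1)
		}
	}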
	I0229 17:52:55.433298   11452 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 17:52:55.444128   11452 command_runner.go:130] > ca.crt
	I0229 17:52:55.444128   11452 command_runner.go:130] > ca.key
	I0229 17:52:55.444128   11452 command_runner.go:130] > healthcheck-client.crt
	I0229 17:52:55.444128   11452 command_runner.go:130] > healthcheck-client.key
	I0229 17:52:55.444128   11452 command_runner.go:130] > peer.crt
	I0229 17:52:55.444128   11452 command_runner.go:130] > peer.key
	I0229 17:52:55.444128   11452 command_runner.go:130] > server.crt
	I0229 17:52:55.444128   11452 command_runner.go:130] > server.key
	I0229 17:52:55.456722   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 17:52:55.469463   11452 command_runner.go:130] > Certificate will not expire
	I0229 17:52:55.483372   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 17:52:55.497570   11452 command_runner.go:130] > Certificate will not expire
	I0229 17:52:55.506622   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 17:52:55.521721   11452 command_runner.go:130] > Certificate will not expire
	I0229 17:52:55.534785   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 17:52:55.548141   11452 command_runner.go:130] > Certificate will not expire
	I0229 17:52:55.561369   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 17:52:55.576979   11452 command_runner.go:130] > Certificate will not expire
	I0229 17:52:55.587237   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 17:52:55.602302   11452 command_runner.go:130] > Certificate will not expire
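	The run of `openssl x509 -checkend 86400` calls asks the same question of every cert: does it survive the next 24 hours? A native equivalent with crypto/x509; the path below is one of the certs checked above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the cert at path expires within d,
	// matching openssl's -checkend semantics.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}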
	I0229 17:52:55.602378   11452 kubeadm.go:404] StartCluster: {Name:functional-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-686300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:52:55.611845   11452 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 17:52:55.661648   11452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 17:52:55.681347   11452 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 17:52:55.682445   11452 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 17:52:55.682445   11452 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 17:52:55.682445   11452 command_runner.go:130] > member
	I0229 17:52:55.682538   11452 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 17:52:55.682631   11452 kubeadm.go:636] restartCluster start
	I0229 17:52:55.693389   11452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 17:52:55.711181   11452 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:55.719580   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-686300
	I0229 17:52:55.905580   11452 kubeconfig.go:92] found "functional-686300" server: "https://127.0.0.1:57189"
	I0229 17:52:55.906365   11452 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:52:55.907583   11452 kapi.go:59] client config for functional-686300: &rest.Config{Host:"https://127.0.0.1:57189", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-686300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-686300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1dd0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 17:52:55.908764   11452 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 17:52:55.919715   11452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 17:52:55.939059   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:55.949297   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 17:52:55.965554   11452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:56.444463   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:56.458502   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 17:52:56.481916   11452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:56.945602   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:56.958981   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 17:52:57.029177   11452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:57.444664   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:57.454769   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 17:52:57.604556   11452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:57.960650   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:57.970910   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 17:52:58.106799   11452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:58.442869   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:58.456258   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 17:52:58.532842   11452 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 17:52:58.954852   11452 api_server.go:166] Checking apiserver status ...
	I0229 17:52:58.965353   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 17:52:59.105494   11452 command_runner.go:130] > 5864
	I0229 17:52:59.123227   11452 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5864/cgroup
	I0229 17:52:59.221971   11452 command_runner.go:130] > 21:freezer:/docker/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/kubepods/burstable/poded92707ba3cc43df1828f1fa1c3c48a1/4f44b187363330ad1e652f7a4ba925ed5e6c0259c0b165ca6932c24ee8c3c1d0
	I0229 17:52:59.223246   11452 api_server.go:182] apiserver freezer: "21:freezer:/docker/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/kubepods/burstable/poded92707ba3cc43df1828f1fa1c3c48a1/4f44b187363330ad1e652f7a4ba925ed5e6c0259c0b165ca6932c24ee8c3c1d0"
	I0229 17:52:59.236243   11452 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8b432bb7d7db19571429b1626acfbf36168a27b085d3d52d265ee8a3e053d09a/kubepods/burstable/poded92707ba3cc43df1828f1fa1c3c48a1/4f44b187363330ad1e652f7a4ba925ed5e6c0259c0b165ca6932c24ee8c3c1d0/freezer.state
	I0229 17:52:59.326154   11452 command_runner.go:130] > THAWED
	I0229 17:52:59.326154   11452 api_server.go:204] freezer state: "THAWED"
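	The freezer probe above maps the apiserver pid to its freezer cgroup via /proc/<pid>/cgroup, then reads freezer.state to confirm the container is THAWED rather than paused. A cgroup-v1 sketch of the same two reads (on cgroup v2 there is no freezer controller line, so this is strictly a v1 illustration):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// freezerState returns the freezer.state of pid's freezer cgroup,
	// assuming the cgroup-v1 mount at /sys/fs/cgroup/freezer.
	func freezerState(pid int) (string, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(data), "\n") {
			parts := strings.SplitN(line, ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(state)), nil
			}
		}
		return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
	}

	func main() {
		state, err := freezerState(5864) // pid taken from the log above
		if err != nil {
			os.Exit(1)
		}
		fmt.Println(state) // expect THAWED for a running apiserver
	}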
	I0229 17:52:59.326154   11452 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57189/healthz ...
	I0229 17:53:03.017656   11452 api_server.go:279] https://127.0.0.1:57189/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 17:53:03.017841   11452 retry.go:31] will retry after 225.859458ms: https://127.0.0.1:57189/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
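	What follows in the log is exactly this loop: hit /healthz, treat a 500 with failed poststarthooks as "not ready yet", and retry after a short jittered delay until the hooks finish. A sketch of the polling client; skipping TLS verification here is purely for brevity, where the real client presents the cluster's client certs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// waitHealthy polls url until it returns 200 or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(300 * time.Millisecond) // the log uses jittered backoff
		}
		return fmt.Errorf("%s never became healthy", url)
	}

	func main() {
		if err := waitHealthy("https://127.0.0.1:57189/healthz", time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}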
	I0229 17:53:03.247240   11452 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57189/healthz ...
	I0229 17:53:03.304670   11452 api_server.go:279] https://127.0.0.1:57189/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 17:53:03.304670   11452 retry.go:31] will retry after 381.74342ms: https://127.0.0.1:57189/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 17:53:03.692639   11452 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57189/healthz ...
	I0229 17:53:03.708713   11452 api_server.go:279] https://127.0.0.1:57189/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 17:53:03.708777   11452 retry.go:31] will retry after 405.170365ms: https://127.0.0.1:57189/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 17:53:04.128323   11452 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57189/healthz ...
	I0229 17:53:04.143619   11452 api_server.go:279] https://127.0.0.1:57189/healthz returned 200:
	ok
	I0229 17:53:04.144842   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods
	I0229 17:53:04.144920   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.144993   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.144993   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.169235   11452 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0229 17:53:04.169235   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.169235   11452 round_trippers.go:580]     Audit-Id: ca88c0da-50c5-414e-960f-227cfa60759b
	I0229 17:53:04.169235   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.169235   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.169354   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.169354   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.169354   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.170497   11452 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"470"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50669 chars]
	I0229 17:53:04.176938   11452 system_pods.go:86] 7 kube-system pods found
	I0229 17:53:04.176938   11452 system_pods.go:89] "coredns-5dd5756b68-ntlsp" [719cbf17-fd3a-4d95-8ca8-ce844e40cd09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 17:53:04.176938   11452 system_pods.go:89] "etcd-functional-686300" [cec45e60-81f4-4664-af69-afc8eb5ebb1c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 17:53:04.176938   11452 system_pods.go:89] "kube-apiserver-functional-686300" [badf4941-60df-4d0f-a1b2-6ce4d4c07903] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 17:53:04.176938   11452 system_pods.go:89] "kube-controller-manager-functional-686300" [db554828-2110-4b08-aa00-3d21b8358f00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 17:53:04.176938   11452 system_pods.go:89] "kube-proxy-qhmp2" [5ce4d53d-2d91-44f7-acd5-d5dc6345b62a] Running
	I0229 17:53:04.176938   11452 system_pods.go:89] "kube-scheduler-functional-686300" [a4ac9ed9-a91e-4b2f-92ab-929b0ea21b39] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 17:53:04.176938   11452 system_pods.go:89] "storage-provisioner" [e194a7c7-6279-4253-9460-70ca0f741a14] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 17:53:04.177576   11452 round_trippers.go:463] GET https://127.0.0.1:57189/version
	I0229 17:53:04.177576   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.177576   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.177576   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.181779   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 17:53:04.181869   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.181869   11452 round_trippers.go:580]     Audit-Id: 312f554b-2c58-4c1a-a613-4df5a8833c64
	I0229 17:53:04.181869   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.181900   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.181900   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.181900   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.181900   11452 round_trippers.go:580]     Content-Length: 264
	I0229 17:53:04.181900   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.181939   11452 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 17:53:04.182093   11452 api_server.go:141] control plane version: v1.28.4
	I0229 17:53:04.182124   11452 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0229 17:53:04.182178   11452 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 17:53:04.182243   11452 kubeadm.go:640] restartCluster took 8.4994709s
	I0229 17:53:04.182268   11452 kubeadm.go:406] StartCluster complete in 8.5798277s
	I0229 17:53:04.182293   11452 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:04.182293   11452 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:53:04.183489   11452 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:53:04.185167   11452 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 17:53:04.185259   11452 addons.go:69] Setting storage-provisioner=true in profile "functional-686300"
	I0229 17:53:04.185331   11452 addons.go:234] Setting addon storage-provisioner=true in "functional-686300"
	I0229 17:53:04.185331   11452 addons.go:69] Setting default-storageclass=true in profile "functional-686300"
	I0229 17:53:04.185331   11452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-686300"
	W0229 17:53:04.185331   11452 addons.go:243] addon storage-provisioner should already be in state true
	I0229 17:53:04.185331   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 17:53:04.185331   11452 host.go:66] Checking if "functional-686300" exists ...
	I0229 17:53:04.185331   11452 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:53:04.197382   11452 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:53:04.197968   11452 kapi.go:59] client config for functional-686300: &rest.Config{Host:"https://127.0.0.1:57189", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-686300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-686300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1dd0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 17:53:04.200346   11452 round_trippers.go:463] GET https://127.0.0.1:57189/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 17:53:04.200422   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.200489   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.200489   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.208744   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:04.208744   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.208744   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.208744   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.208744   11452 round_trippers.go:580]     Content-Length: 291
	I0229 17:53:04.208744   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.208744   11452 round_trippers.go:580]     Audit-Id: 0a39becb-99ec-49cf-9d82-d907b14697a3
	I0229 17:53:04.208744   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.208744   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.208744   11452 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9084b165-aa05-4304-bdbf-328126f3cd93","resourceVersion":"443","creationTimestamp":"2024-02-29T17:51:58Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 17:53:04.208744   11452 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-686300" context rescaled to 1 replicas
	I0229 17:53:04.209287   11452 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 17:53:04.212942   11452 out.go:177] * Verifying Kubernetes components...
	I0229 17:53:04.212095   11452 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
	I0229 17:53:04.212095   11452 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
	I0229 17:53:04.229525   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:53:04.345846   11452 command_runner.go:130] > apiVersion: v1
	I0229 17:53:04.345846   11452 command_runner.go:130] > data:
	I0229 17:53:04.345846   11452 command_runner.go:130] >   Corefile: |
	I0229 17:53:04.345846   11452 command_runner.go:130] >     .:53 {
	I0229 17:53:04.345846   11452 command_runner.go:130] >         log
	I0229 17:53:04.345846   11452 command_runner.go:130] >         errors
	I0229 17:53:04.345846   11452 command_runner.go:130] >         health {
	I0229 17:53:04.345846   11452 command_runner.go:130] >            lameduck 5s
	I0229 17:53:04.345846   11452 command_runner.go:130] >         }
	I0229 17:53:04.345846   11452 command_runner.go:130] >         ready
	I0229 17:53:04.345846   11452 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 17:53:04.345846   11452 command_runner.go:130] >            pods insecure
	I0229 17:53:04.345846   11452 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 17:53:04.345846   11452 command_runner.go:130] >            ttl 30
	I0229 17:53:04.345846   11452 command_runner.go:130] >         }
	I0229 17:53:04.345846   11452 command_runner.go:130] >         prometheus :9153
	I0229 17:53:04.345846   11452 command_runner.go:130] >         hosts {
	I0229 17:53:04.346392   11452 command_runner.go:130] >            192.168.65.254 host.minikube.internal
	I0229 17:53:04.346392   11452 command_runner.go:130] >            fallthrough
	I0229 17:53:04.346392   11452 command_runner.go:130] >         }
	I0229 17:53:04.346392   11452 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 17:53:04.346392   11452 command_runner.go:130] >            max_concurrent 1000
	I0229 17:53:04.346392   11452 command_runner.go:130] >         }
	I0229 17:53:04.346392   11452 command_runner.go:130] >         cache 30
	I0229 17:53:04.346530   11452 command_runner.go:130] >         loop
	I0229 17:53:04.346530   11452 command_runner.go:130] >         reload
	I0229 17:53:04.346530   11452 command_runner.go:130] >         loadbalance
	I0229 17:53:04.346530   11452 command_runner.go:130] >     }
	I0229 17:53:04.346588   11452 command_runner.go:130] > kind: ConfigMap
	I0229 17:53:04.346588   11452 command_runner.go:130] > metadata:
	I0229 17:53:04.346711   11452 command_runner.go:130] >   creationTimestamp: "2024-02-29T17:51:58Z"
	I0229 17:53:04.346754   11452 command_runner.go:130] >   name: coredns
	I0229 17:53:04.346754   11452 command_runner.go:130] >   namespace: kube-system
	I0229 17:53:04.346754   11452 command_runner.go:130] >   resourceVersion: "396"
	I0229 17:53:04.346754   11452 command_runner.go:130] >   uid: c23b620e-93a3-4d62-8152-f0ebddd9a690
	I0229 17:53:04.347016   11452 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 17:53:04.358780   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-686300
	I0229 17:53:04.404644   11452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:53:04.406977   11452 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 17:53:04.406977   11452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 17:53:04.415295   11452 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:53:04.416026   11452 kapi.go:59] client config for functional-686300: &rest.Config{Host:"https://127.0.0.1:57189", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-686300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\functional-686300\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1dd0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 17:53:04.417128   11452 addons.go:234] Setting addon default-storageclass=true in "functional-686300"
	W0229 17:53:04.417175   11452 addons.go:243] addon default-storageclass should already be in state true
	I0229 17:53:04.417297   11452 host.go:66] Checking if "functional-686300" exists ...
	I0229 17:53:04.422667   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:53:04.436413   11452 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
	I0229 17:53:04.552100   11452 node_ready.go:35] waiting up to 6m0s for node "functional-686300" to be "Ready" ...
	I0229 17:53:04.552629   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:04.552629   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.552693   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.552693   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.568184   11452 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0229 17:53:04.568236   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.568236   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.568236   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.568236   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.568236   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.568296   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.568296   11452 round_trippers.go:580]     Audit-Id: 190b9051-6129-499e-96bd-6e8d47de47f2
	I0229 17:53:04.569629   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:04.570856   11452 node_ready.go:49] node "functional-686300" has status "Ready":"True"
	I0229 17:53:04.570986   11452 node_ready.go:38] duration metric: took 18.7556ms waiting for node "functional-686300" to be "Ready" ...
	I0229 17:53:04.571021   11452 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 17:53:04.571163   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods
	I0229 17:53:04.571163   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.571163   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.571163   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.582462   11452 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0229 17:53:04.582462   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.582462   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.582462   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.582462   11452 round_trippers.go:580]     Audit-Id: 57e607a7-b7b1-456e-908d-e7757eaa7242
	I0229 17:53:04.582462   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.582462   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.582462   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.588576   11452 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"470"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50669 chars]
	I0229 17:53:04.592641   11452 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ntlsp" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:04.592879   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:04.592937   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.592937   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.592937   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.602821   11452 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 17:53:04.603393   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.603393   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.603393   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.603393   11452 round_trippers.go:580]     Audit-Id: 441a08eb-1394-433f-b8f2-3d3930b9206a
	I0229 17:53:04.603458   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.603458   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.603535   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.603726   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:04.605375   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:04.605375   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:04.605375   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:04.605375   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:04.611958   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:04.611958   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:04.612504   11452 round_trippers.go:580]     Audit-Id: 4242ca07-bb4c-4278-b3b3-41027a7a7e3c
	I0229 17:53:04.612504   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:04.612504   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:04.612550   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:04.612550   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:04.612550   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:04 GMT
	I0229 17:53:04.612751   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:04.652168   11452 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 17:53:04.652168   11452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 17:53:04.660981   11452 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
	I0229 17:53:04.667880   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:53:04.843174   11452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 17:53:04.844923   11452 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
	I0229 17:53:05.052146   11452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 17:53:05.095142   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:05.095142   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:05.095142   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:05.095142   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:05.106906   11452 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0229 17:53:05.106906   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:05.106906   11452 round_trippers.go:580]     Audit-Id: fe1c0bf7-7e32-4f1b-8455-21e0b332b578
	I0229 17:53:05.106906   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:05.106906   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:05.106906   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:05.106906   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:05.106906   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:05 GMT
	I0229 17:53:05.107622   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:05.107807   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:05.107807   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:05.107807   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:05.107807   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:05.116586   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:05.116586   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:05.116586   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:05.116586   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:05 GMT
	I0229 17:53:05.116586   11452 round_trippers.go:580]     Audit-Id: d33a39bd-4ba5-4183-9548-24f28aaf1601
	I0229 17:53:05.116586   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:05.116586   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:05.116586   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:05.117471   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:05.594023   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:05.594023   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:05.594023   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:05.594023   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:05.608261   11452 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 17:53:05.608261   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:05.608261   11452 round_trippers.go:580]     Audit-Id: eb16cca8-100a-4c79-9fe4-f5a03e6c44ca
	I0229 17:53:05.608261   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:05.608261   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:05.608261   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:05.608261   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:05.608261   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:05 GMT
	I0229 17:53:05.609180   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:05.609947   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:05.609947   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:05.609947   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:05.609947   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:05.618504   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:05.618504   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:05.618504   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:05.618504   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:05.618504   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:05 GMT
	I0229 17:53:05.618504   11452 round_trippers.go:580]     Audit-Id: dd55315a-063f-48fc-a831-caa78f7be23a
	I0229 17:53:05.618504   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:05.618504   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:05.618504   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:06.081450   11452 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0229 17:53:06.081450   11452 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0229 17:53:06.081450   11452 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0229 17:53:06.081450   11452 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0229 17:53:06.081450   11452 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0229 17:53:06.081450   11452 command_runner.go:130] > pod/storage-provisioner configured
	I0229 17:53:06.081450   11452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2381728s)
	I0229 17:53:06.081450   11452 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0229 17:53:06.081980   11452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.0298262s)
	I0229 17:53:06.082164   11452 round_trippers.go:463] GET https://127.0.0.1:57189/apis/storage.k8s.io/v1/storageclasses
	I0229 17:53:06.082164   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:06.082164   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:06.082292   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:06.085267   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 17:53:06.085267   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:06.085267   11452 round_trippers.go:580]     Audit-Id: 0e92c3fa-36d0-4d26-9f32-70855e266f66
	I0229 17:53:06.085267   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:06.085267   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:06.085267   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:06.085267   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:06.085267   11452 round_trippers.go:580]     Content-Length: 1273
	I0229 17:53:06.085267   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:06 GMT
	I0229 17:53:06.087813   11452 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"470"},"items":[{"metadata":{"name":"standard","uid":"10dc6567-d627-4cc0-902a-cda82f95467e","resourceVersion":"394","creationTimestamp":"2024-02-29T17:52:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T17:52:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0229 17:53:06.088069   11452 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"10dc6567-d627-4cc0-902a-cda82f95467e","resourceVersion":"394","creationTimestamp":"2024-02-29T17:52:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T17:52:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 17:53:06.088620   11452 round_trippers.go:463] PUT https://127.0.0.1:57189/apis/storage.k8s.io/v1/storageclasses/standard
	I0229 17:53:06.088712   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:06.088712   11452 round_trippers.go:473]     Content-Type: application/json
	I0229 17:53:06.088712   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:06.088712   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:06.093739   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:06.093817   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:06.093817   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:06.093817   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:06.096186   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:06.096186   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:06.096186   11452 round_trippers.go:580]     Audit-Id: 267892bd-3a42-41c0-9ba5-e6792294323c
	I0229 17:53:06.096186   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:06.096186   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:06.096186   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:06.096186   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:06.096186   11452 round_trippers.go:580]     Content-Length: 1220
	I0229 17:53:06.096186   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:06 GMT
	I0229 17:53:06.096186   11452 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"10dc6567-d627-4cc0-902a-cda82f95467e","resourceVersion":"394","creationTimestamp":"2024-02-29T17:52:14Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T17:52:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 17:53:06.104638   11452 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 17:53:06.112740   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:06.112740   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:06 GMT
	I0229 17:53:06.112740   11452 round_trippers.go:580]     Audit-Id: ab4fd17e-32d8-4d9f-85e4-bc5754054ffa
	I0229 17:53:06.112740   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:06.112740   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:06.112740   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:06.112740   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:06.113034   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:06.112496   11452 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 17:53:06.115595   11452 addons.go:505] enable addons completed in 1.9304441s: enabled=[storage-provisioner default-storageclass]
	I0229 17:53:06.114326   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:06.115595   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:06.115595   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:06.115595   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:06.121003   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:06.121083   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:06.121083   11452 round_trippers.go:580]     Audit-Id: fe474d20-b30f-4e5f-96c5-470d489447d6
	I0229 17:53:06.121083   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:06.121137   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:06.121137   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:06.121137   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:06.121165   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:06 GMT
	I0229 17:53:06.121184   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:06.606237   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:06.606441   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:06.606441   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:06.606441   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:06.612753   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:06.612810   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:06.612854   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:06.612880   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:06 GMT
	I0229 17:53:06.612893   11452 round_trippers.go:580]     Audit-Id: e4b00181-1331-4e41-ba31-723b3949a3bf
	I0229 17:53:06.612909   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:06.612909   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:06.612909   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:06.612909   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:06.613924   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:06.613985   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:06.614016   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:06.614016   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:06.621125   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:06.621125   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:06.621125   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:06.621125   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:06.621125   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:06.621125   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:06 GMT
	I0229 17:53:06.621125   11452 round_trippers.go:580]     Audit-Id: 903c9903-1b61-48b0-be02-660a619abc4f
	I0229 17:53:06.621125   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:06.621125   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:06.621745   11452 pod_ready.go:102] pod "coredns-5dd5756b68-ntlsp" in "kube-system" namespace has status "Ready":"False"
	I0229 17:53:07.104649   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:07.104649   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:07.104649   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:07.104649   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:07.113447   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:07.113447   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:07.113447   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:07 GMT
	I0229 17:53:07.113551   11452 round_trippers.go:580]     Audit-Id: a52acbe7-0966-4c8a-830c-3afa91d15a22
	I0229 17:53:07.113551   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:07.113551   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:07.113551   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:07.113551   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:07.113917   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:07.114809   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:07.114809   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:07.114809   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:07.114809   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:07.126810   11452 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 17:53:07.126954   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:07.126954   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:07.126954   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:07 GMT
	I0229 17:53:07.126954   11452 round_trippers.go:580]     Audit-Id: cc827d77-e773-4d85-9eaf-062b18b908d3
	I0229 17:53:07.127008   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:07.127008   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:07.127008   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:07.127054   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:07.603441   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:07.603441   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:07.603441   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:07.603441   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:07.611060   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:07.611123   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:07.611123   11452 round_trippers.go:580]     Audit-Id: 5d038a4f-dfe7-4c26-88c6-6c79f9525e02
	I0229 17:53:07.611123   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:07.611182   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:07.611182   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:07.611203   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:07.611203   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:07 GMT
	I0229 17:53:07.611706   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:07.612144   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:07.612144   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:07.612144   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:07.612144   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:07.618243   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:07.618243   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:07.618243   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:07.618243   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:07.618243   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:07.618243   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:07.618243   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:07 GMT
	I0229 17:53:07.618243   11452 round_trippers.go:580]     Audit-Id: 6bc4128e-aa53-48af-8a33-36a2a4fefbfb
	I0229 17:53:07.618924   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:08.107701   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:08.107962   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:08.107962   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:08.107962   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:08.115040   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:08.115159   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:08.115159   11452 round_trippers.go:580]     Audit-Id: 590667be-aede-4d0a-80e1-a728135b390d
	I0229 17:53:08.115194   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:08.115194   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:08.115194   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:08.115194   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:08.115194   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:08 GMT
	I0229 17:53:08.115612   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:08.115956   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:08.115956   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:08.115956   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:08.115956   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:08.122554   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:08.122554   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:08.122554   11452 round_trippers.go:580]     Audit-Id: d68f13f3-4b2c-40f4-955c-913b447cdc57
	I0229 17:53:08.122554   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:08.122554   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:08.122554   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:08.122554   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:08.122554   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:08 GMT
	I0229 17:53:08.123193   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:08.593205   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:08.593322   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:08.593322   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:08.593433   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:08.599377   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:08.599377   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:08.599377   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:08.599377   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:08.599377   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:08 GMT
	I0229 17:53:08.599377   11452 round_trippers.go:580]     Audit-Id: a480d882-297c-4d4a-8c23-bc42c9d9f043
	I0229 17:53:08.599377   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:08.599377   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:08.600490   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:08.600904   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:08.600904   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:08.600904   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:08.600904   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:08.607497   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:08.607497   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:08.607497   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:08.607497   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:08.607497   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:08.607497   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:08.607497   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:08 GMT
	I0229 17:53:08.607497   11452 round_trippers.go:580]     Audit-Id: 87d0b324-66c7-4c53-b614-0551f69d1052
	I0229 17:53:08.607497   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:09.100614   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:09.100914   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:09.100914   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:09.100914   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:09.107650   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:09.107761   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:09.107761   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:09.107761   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:09.107761   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:09.107883   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:09 GMT
	I0229 17:53:09.107941   11452 round_trippers.go:580]     Audit-Id: eaa6188e-e09c-4495-b977-061aa937a948
	I0229 17:53:09.107941   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:09.108364   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:09.109529   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:09.109529   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:09.109529   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:09.109529   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:09.117338   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:09.117492   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:09.117492   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:09 GMT
	I0229 17:53:09.117492   11452 round_trippers.go:580]     Audit-Id: a9d587d4-c45f-42dc-b987-f58a4b33414d
	I0229 17:53:09.117492   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:09.117492   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:09.117492   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:09.117492   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:09.117748   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:09.118137   11452 pod_ready.go:102] pod "coredns-5dd5756b68-ntlsp" in "kube-system" namespace has status "Ready":"False"
	I0229 17:53:09.611259   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:09.611259   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:09.611337   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:09.611337   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:09.617581   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:09.617581   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:09.617581   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:09 GMT
	I0229 17:53:09.617581   11452 round_trippers.go:580]     Audit-Id: a9d00fcc-b2c4-4978-b728-59edda53f4fa
	I0229 17:53:09.617581   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:09.617581   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:09.617581   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:09.617581   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:09.618177   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:09.618775   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:09.618775   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:09.618775   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:09.618775   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:09.625981   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:09.625981   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:09.625981   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:09.625981   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:09 GMT
	I0229 17:53:09.625981   11452 round_trippers.go:580]     Audit-Id: 2892d5b6-2069-4bea-9e71-36ec26da7662
	I0229 17:53:09.625981   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:09.625981   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:09.625981   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:09.627249   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:10.105101   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:10.105101   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:10.105101   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:10.105101   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:10.111815   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:10.111815   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:10.111815   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:10.111815   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:10.111815   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:10 GMT
	I0229 17:53:10.111815   11452 round_trippers.go:580]     Audit-Id: 9c0313b6-1ca0-4157-bcf7-323212a2e70f
	I0229 17:53:10.111815   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:10.111815   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:10.112577   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:10.113210   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:10.113210   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:10.113210   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:10.113210   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:10.121799   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:10.121799   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:10.121799   11452 round_trippers.go:580]     Audit-Id: 7a69c3a8-e2a7-49ad-bffb-a87536a2b3de
	I0229 17:53:10.121799   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:10.121799   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:10.121799   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:10.121799   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:10.121799   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:10 GMT
	I0229 17:53:10.121799   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:10.602133   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:10.602221   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:10.602221   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:10.602311   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:10.608908   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:10.608982   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:10.608982   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:10 GMT
	I0229 17:53:10.608982   11452 round_trippers.go:580]     Audit-Id: 53eead0e-c4f2-49b7-925b-ac14c898b12c
	I0229 17:53:10.608982   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:10.609056   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:10.609056   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:10.609076   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:10.609394   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:10.610020   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:10.610020   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:10.610020   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:10.610020   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:10.617270   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:10.617270   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:10.617270   11452 round_trippers.go:580]     Audit-Id: f0856580-7f9f-4a3e-aaec-edee3963ee52
	I0229 17:53:10.617270   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:10.617270   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:10.617270   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:10.617270   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:10.617270   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:10 GMT
	I0229 17:53:10.617270   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:11.107920   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:11.107997   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:11.107997   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:11.107997   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:11.115729   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:11.115792   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:11.115792   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:11.115792   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:11.115792   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:11.115792   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:11 GMT
	I0229 17:53:11.115792   11452 round_trippers.go:580]     Audit-Id: 07422ce5-916b-4c20-9c71-a5536ea10ad1
	I0229 17:53:11.115792   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:11.116078   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:11.116381   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:11.116381   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:11.116381   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:11.116381   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:11.126414   11452 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 17:53:11.126447   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:11.126447   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:11.126447   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:11.126524   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:11 GMT
	I0229 17:53:11.126524   11452 round_trippers.go:580]     Audit-Id: d3e419fd-540f-4cef-862c-8567a0e2bd8f
	I0229 17:53:11.126572   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:11.126572   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:11.126885   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:11.127414   11452 pod_ready.go:102] pod "coredns-5dd5756b68-ntlsp" in "kube-system" namespace has status "Ready":"False"
	I0229 17:53:11.607859   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:11.607859   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:11.607859   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:11.607859   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:11.614800   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:11.614942   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:11.614942   11452 round_trippers.go:580]     Audit-Id: e3729127-94fb-4e57-9a41-76deb42e7d22
	I0229 17:53:11.614942   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:11.614942   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:11.614942   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:11.614999   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:11.614999   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:11 GMT
	I0229 17:53:11.615243   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:11.615998   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:11.616068   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:11.616068   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:11.616068   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:11.624020   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:11.624020   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:11.624020   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:11.624020   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:11.624020   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:11 GMT
	I0229 17:53:11.624020   11452 round_trippers.go:580]     Audit-Id: 346f9e38-b4a6-4426-9478-86f7569cfba5
	I0229 17:53:11.624020   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:11.624020   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:11.624276   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:12.107847   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:12.107847   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:12.107847   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:12.107847   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:12.114652   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:12.114652   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:12.114652   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:12.114779   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:12.114779   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:12.114779   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:12.114779   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:12 GMT
	I0229 17:53:12.114779   11452 round_trippers.go:580]     Audit-Id: 3d6d700f-f3d8-49e0-af97-42d146dc724a
	I0229 17:53:12.115069   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"465","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6174 chars]
	I0229 17:53:12.115368   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:12.115368   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:12.115368   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:12.115368   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:12.120924   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:12.120924   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:12.120924   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:12.120924   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:12.120924   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:12.120924   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:12 GMT
	I0229 17:53:12.120924   11452 round_trippers.go:580]     Audit-Id: f752d395-55c4-4af0-8164-0411c1278df4
	I0229 17:53:12.120924   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:12.120924   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:12.606200   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp
	I0229 17:53:12.606200   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:12.606200   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:12.606200   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:12.615698   11452 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 17:53:12.615698   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:12.615698   11452 round_trippers.go:580]     Audit-Id: 4c6697fb-efa2-4ae2-8f43-00eb747cca6c
	I0229 17:53:12.615698   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:12.615698   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:12.615698   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:12.615698   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:12.615698   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:12 GMT
	I0229 17:53:12.615698   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"530","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5945 chars]
	I0229 17:53:12.616773   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:12.616820   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:12.616862   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:12.616862   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:12.624374   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:12.624374   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:12.624374   11452 round_trippers.go:580]     Audit-Id: aab13e57-dc3c-45a2-baf9-f27a6736c5f4
	I0229 17:53:12.624374   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:12.624374   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:12.624374   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:12.624374   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:12.624374   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:12 GMT
	I0229 17:53:12.625163   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:12.625163   11452 pod_ready.go:92] pod "coredns-5dd5756b68-ntlsp" in "kube-system" namespace has status "Ready":"True"
	I0229 17:53:12.625163   11452 pod_ready.go:81] duration metric: took 8.0324635s waiting for pod "coredns-5dd5756b68-ntlsp" in "kube-system" namespace to be "Ready" ...
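The pod_ready.go:92 / pod_ready.go:81 transitions above reduce to one check on each Pod object fetched by the GETs logged here: whether the PodReady condition is True. A minimal sketch of that check in Go, using the public k8s.io/api types (an illustration of the pattern, not minikube's exact helper):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod's PodReady condition is True, i.e. the
// same state the log prints as `has status "Ready":"True"`.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}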
	I0229 17:53:12.625731   11452 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:12.625847   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/etcd-functional-686300
	I0229 17:53:12.625876   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:12.625876   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:12.625876   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:12.629065   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 17:53:12.629065   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:12.630973   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:12.630973   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:12.630973   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:12.630973   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:12 GMT
	I0229 17:53:12.630973   11452 round_trippers.go:580]     Audit-Id: 2866e734-0a45-48fd-8a7e-4ebc1aa930fe
	I0229 17:53:12.630973   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:12.631105   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-686300","namespace":"kube-system","uid":"cec45e60-81f4-4664-af69-afc8eb5ebb1c","resourceVersion":"466","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.mirror":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.seen":"2024-02-29T17:51:58.976982498Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I0229 17:53:12.631638   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:12.631638   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:12.631638   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:12.631638   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:12.638190   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:12.638233   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:12.638233   11452 round_trippers.go:580]     Audit-Id: c996d4ab-939d-4d71-8899-2133eb227f03
	I0229 17:53:12.638233   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:12.638233   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:12.638233   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:12.638233   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:12.638233   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:12 GMT
	I0229 17:53:12.638499   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:13.137232   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/etcd-functional-686300
	I0229 17:53:13.137232   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:13.137312   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:13.137312   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:13.144921   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:13.144982   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:13.144982   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:13.144982   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:13 GMT
	I0229 17:53:13.144982   11452 round_trippers.go:580]     Audit-Id: 8588fd1d-e329-4a1e-9b98-048e0c7a0a2e
	I0229 17:53:13.144982   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:13.145025   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:13.145025   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:13.145383   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-686300","namespace":"kube-system","uid":"cec45e60-81f4-4664-af69-afc8eb5ebb1c","resourceVersion":"466","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.mirror":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.seen":"2024-02-29T17:51:58.976982498Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I0229 17:53:13.145541   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:13.146080   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:13.146080   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:13.146080   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:13.163390   11452 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0229 17:53:13.163484   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:13.163484   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:13 GMT
	I0229 17:53:13.163554   11452 round_trippers.go:580]     Audit-Id: a4aae3ef-b2f3-4267-9663-667020acf770
	I0229 17:53:13.163554   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:13.163586   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:13.163586   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:13.163586   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:13.163909   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:13.637577   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/etcd-functional-686300
	I0229 17:53:13.637799   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:13.637799   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:13.637799   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:13.646500   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:13.646573   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:13.646573   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:13 GMT
	I0229 17:53:13.646573   11452 round_trippers.go:580]     Audit-Id: 6620bbb1-f8d6-4770-bd1d-8e1deafb526a
	I0229 17:53:13.646573   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:13.646573   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:13.646655   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:13.646655   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:13.646808   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-686300","namespace":"kube-system","uid":"cec45e60-81f4-4664-af69-afc8eb5ebb1c","resourceVersion":"466","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.mirror":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.seen":"2024-02-29T17:51:58.976982498Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I0229 17:53:13.647544   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:13.647544   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:13.647618   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:13.647618   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:13.653365   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:13.653365   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:13.653365   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:13.653365   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:13.653365   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:13 GMT
	I0229 17:53:13.653365   11452 round_trippers.go:580]     Audit-Id: 5ec26179-a2bf-4749-b48b-c8ab84701d92
	I0229 17:53:13.653365   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:13.653365   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:13.653964   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:14.139413   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/etcd-functional-686300
	I0229 17:53:14.139517   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:14.139517   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:14.139517   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:14.145705   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:14.145754   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:14.145767   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:14.145767   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:14.145794   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:14.145794   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:14.145794   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:14 GMT
	I0229 17:53:14.145815   11452 round_trippers.go:580]     Audit-Id: 3151511c-ab96-4e7b-be86-e640a1ae5b22
	I0229 17:53:14.145940   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-686300","namespace":"kube-system","uid":"cec45e60-81f4-4664-af69-afc8eb5ebb1c","resourceVersion":"466","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.mirror":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.seen":"2024-02-29T17:51:58.976982498Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I0229 17:53:14.146530   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:14.146530   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:14.146530   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:14.146530   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:14.152094   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:14.152094   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:14.152094   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:14.152094   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:14 GMT
	I0229 17:53:14.152094   11452 round_trippers.go:580]     Audit-Id: b3546c3c-844c-48f4-8946-e3419d195937
	I0229 17:53:14.152094   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:14.152094   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:14.152094   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:14.152094   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:14.638230   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/etcd-functional-686300
	I0229 17:53:14.638230   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:14.638312   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:14.638312   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:14.644286   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:14.644411   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:14.644411   11452 round_trippers.go:580]     Audit-Id: e30f99fc-2de1-4916-a09b-272cea27744e
	I0229 17:53:14.644411   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:14.644411   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:14.644411   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:14.644411   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:14.644411   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:14 GMT
	I0229 17:53:14.644466   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-686300","namespace":"kube-system","uid":"cec45e60-81f4-4664-af69-afc8eb5ebb1c","resourceVersion":"466","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.mirror":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.seen":"2024-02-29T17:51:58.976982498Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I0229 17:53:14.645100   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:14.645140   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:14.645140   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:14.645168   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:14.652610   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:14.652610   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:14.652610   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:14.652610   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:14.652610   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:14.652610   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:14.652610   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:14 GMT
	I0229 17:53:14.652610   11452 round_trippers.go:580]     Audit-Id: 4cf2b695-05f3-4675-b8d8-380617540f06
	I0229 17:53:14.653314   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:14.653314   11452 pod_ready.go:102] pod "etcd-functional-686300" in "kube-system" namespace has status "Ready":"False"
	I0229 17:53:15.138390   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/etcd-functional-686300
	I0229 17:53:15.138466   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.138466   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.138466   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.144841   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:15.144841   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.144841   11452 round_trippers.go:580]     Audit-Id: da1dd8df-164a-4359-af8c-0718c93e8b83
	I0229 17:53:15.144841   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.144841   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.144841   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.144841   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.144841   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.144841   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-686300","namespace":"kube-system","uid":"cec45e60-81f4-4664-af69-afc8eb5ebb1c","resourceVersion":"537","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.mirror":"67c8bf0ab7634b3488482d1ffc5ab864","kubernetes.io/config.seen":"2024-02-29T17:51:58.976982498Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6066 chars]
	I0229 17:53:15.145558   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:15.145558   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.145558   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.145558   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.152641   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:15.152641   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.152641   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.152702   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.152702   11452 round_trippers.go:580]     Audit-Id: 8e445e04-1990-4fdb-880f-6d78a08a0a00
	I0229 17:53:15.152702   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.152702   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.152702   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.154098   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:15.155060   11452 pod_ready.go:92] pod "etcd-functional-686300" in "kube-system" namespace has status "Ready":"True"
	I0229 17:53:15.155114   11452 pod_ready.go:81] duration metric: took 2.5293649s waiting for pod "etcd-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.155114   11452 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.155290   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-686300
	I0229 17:53:15.155353   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.155418   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.155418   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.171266   11452 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0229 17:53:15.171322   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.171322   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.171322   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.171322   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.171322   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.171418   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.171418   11452 round_trippers.go:580]     Audit-Id: 96f2150e-6fab-47d4-ae88-569e5534d090
	I0229 17:53:15.171653   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-686300","namespace":"kube-system","uid":"badf4941-60df-4d0f-a1b2-6ce4d4c07903","resourceVersion":"533","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"ed92707ba3cc43df1828f1fa1c3c48a1","kubernetes.io/config.mirror":"ed92707ba3cc43df1828f1fa1c3c48a1","kubernetes.io/config.seen":"2024-02-29T17:51:58.976986999Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8448 chars]
	I0229 17:53:15.172539   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:15.172569   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.172569   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.172569   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.179222   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:15.179222   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.179222   11452 round_trippers.go:580]     Audit-Id: a04c7c13-5eef-4807-b381-e56ba5e76742
	I0229 17:53:15.179222   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.179222   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.179222   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.179762   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.179762   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.179872   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:15.180487   11452 pod_ready.go:92] pod "kube-apiserver-functional-686300" in "kube-system" namespace has status "Ready":"True"
	I0229 17:53:15.180540   11452 pod_ready.go:81] duration metric: took 25.3703ms waiting for pod "kube-apiserver-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.180578   11452 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.180622   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-686300
	I0229 17:53:15.180622   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.180622   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.180622   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.185577   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 17:53:15.186523   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.186601   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.186601   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.186601   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.186601   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.186601   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.186601   11452 round_trippers.go:580]     Audit-Id: 166fb41d-76c2-4cdf-b1f3-547946f6c90b
	I0229 17:53:15.186601   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-686300","namespace":"kube-system","uid":"db554828-2110-4b08-aa00-3d21b8358f00","resourceVersion":"529","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"09c959bdf04c8a95cd733fa038e93752","kubernetes.io/config.mirror":"09c959bdf04c8a95cd733fa038e93752","kubernetes.io/config.seen":"2024-02-29T17:51:58.976988299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7816 chars]
	I0229 17:53:15.187405   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:15.187405   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.187405   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.187405   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.190728   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 17:53:15.192767   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.192767   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.192767   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.192767   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.192767   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.192767   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.192767   11452 round_trippers.go:580]     Audit-Id: cfbc4441-1992-4f4e-9f14-c794b8a26e80
	I0229 17:53:15.192767   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:15.193496   11452 pod_ready.go:92] pod "kube-controller-manager-functional-686300" in "kube-system" namespace has status "Ready":"True"
	I0229 17:53:15.193496   11452 pod_ready.go:81] duration metric: took 12.9179ms waiting for pod "kube-controller-manager-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.193496   11452 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhmp2" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.193496   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/kube-proxy-qhmp2
	I0229 17:53:15.193496   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.193496   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.193496   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.205855   11452 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 17:53:15.205855   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.205855   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.205855   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.205855   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.205855   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.205855   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.205855   11452 round_trippers.go:580]     Audit-Id: 1d8e1b96-72a5-4391-9479-0ab39f2c122a
	I0229 17:53:15.205855   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qhmp2","generateName":"kube-proxy-","namespace":"kube-system","uid":"5ce4d53d-2d91-44f7-acd5-d5dc6345b62a","resourceVersion":"463","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02d2180f-2f45-4992-bbab-c168a470c6fc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02d2180f-2f45-4992-bbab-c168a470c6fc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I0229 17:53:15.206579   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:15.206579   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.206579   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.206579   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.212156   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 17:53:15.212241   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.212341   11452 round_trippers.go:580]     Audit-Id: 756b5b8f-62a9-43b1-bbe2-eab118cc3162
	I0229 17:53:15.212503   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.212503   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.212637   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.212931   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.212931   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.212931   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:15.213808   11452 pod_ready.go:92] pod "kube-proxy-qhmp2" in "kube-system" namespace has status "Ready":"True"
	I0229 17:53:15.213808   11452 pod_ready.go:81] duration metric: took 20.3113ms waiting for pod "kube-proxy-qhmp2" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.213808   11452 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.213808   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-686300
	I0229 17:53:15.213808   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.213808   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.213808   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.220353   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:15.220353   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.220353   11452 round_trippers.go:580]     Audit-Id: 9d569244-1699-4c5d-b69d-0df43188c803
	I0229 17:53:15.220353   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.220353   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.220353   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.220884   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.220884   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.221878   11452 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-686300","namespace":"kube-system","uid":"a4ac9ed9-a91e-4b2f-92ab-929b0ea21b39","resourceVersion":"531","creationTimestamp":"2024-02-29T17:51:59Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fb1ebaabe1c07fe1c7d63851d3665011","kubernetes.io/config.mirror":"fb1ebaabe1c07fe1c7d63851d3665011","kubernetes.io/config.seen":"2024-02-29T17:51:58.977025600Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:51:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4698 chars]
	I0229 17:53:15.221878   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes/functional-686300
	I0229 17:53:15.221878   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.221878   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.221878   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.228627   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:15.228627   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.228627   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.228627   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.228627   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.228627   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.228627   11452 round_trippers.go:580]     Audit-Id: 94754832-61c4-4da2-82b6-d5f5e4512d53
	I0229 17:53:15.228627   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.228627   11452 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:51:54Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0229 17:53:15.230373   11452 pod_ready.go:92] pod "kube-scheduler-functional-686300" in "kube-system" namespace has status "Ready":"True"
	I0229 17:53:15.230407   11452 pod_ready.go:81] duration metric: took 16.5987ms waiting for pod "kube-scheduler-functional-686300" in "kube-system" namespace to be "Ready" ...
	I0229 17:53:15.230433   11452 pod_ready.go:38] duration metric: took 10.6593346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
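The request cadence above (17:53:12.6 → 13.1 → 13.6 → 14.1 → …) with the stated 6m0s ceiling is the standard client-go poll-until-ready loop. A hedged sketch of that loop, reusing podIsReady from the earlier sketch; waitForPodReady is an illustrative name, not minikube's:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady re-fetches the named kube-system pod every 500ms, for up
// to 6 minutes, until its PodReady condition turns True.
func waitForPodReady(ctx context.Context, client kubernetes.Interface, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		return podIsReady(pod), nil
	})
}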
	I0229 17:53:15.230433   11452 api_server.go:52] waiting for apiserver process to appear ...
	I0229 17:53:15.242761   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 17:53:15.267563   11452 command_runner.go:130] > 5864
	I0229 17:53:15.267563   11452 api_server.go:72] duration metric: took 11.0581094s to wait for apiserver process to appear ...
	I0229 17:53:15.267563   11452 api_server.go:88] waiting for apiserver healthz status ...
	I0229 17:53:15.267563   11452 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57189/healthz ...
	I0229 17:53:15.312090   11452 api_server.go:279] https://127.0.0.1:57189/healthz returned 200:
	ok
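The healthz gate logged above is a plain HTTPS GET that treats status 200 with the literal body "ok" as healthy. A stripped-down sketch (certificate verification is skipped here only to keep the example short; the real probe authenticates with the cluster's client certs):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// checkHealthz probes <endpoint>/healthz and returns nil on 200 + "ok".
func checkHealthz(endpoint string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}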
	I0229 17:53:15.312795   11452 round_trippers.go:463] GET https://127.0.0.1:57189/version
	I0229 17:53:15.312832   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.312872   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.312872   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.315343   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 17:53:15.315343   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.315343   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.315343   11452 round_trippers.go:580]     Audit-Id: d3d4d889-caa0-4752-93fc-cc5b1d83f108
	I0229 17:53:15.315343   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.315343   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.315343   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.315343   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.315343   11452 round_trippers.go:580]     Content-Length: 264
	I0229 17:53:15.315343   11452 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 17:53:15.316873   11452 api_server.go:141] control plane version: v1.28.4
	I0229 17:53:15.316955   11452 api_server.go:131] duration metric: took 49.3093ms to wait for apiserver health ...
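The healthz and version probes logged above amount to two HTTPS GETs against the forwarded apiserver port. A minimal Go sketch, assuming the ephemeral port 57189 from this run and skipping certificate verification (the real client verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify keeps the sketch short; a real check trusts the cluster CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	base := "https://127.0.0.1:57189" // ephemeral forwarded port from the log above

	resp, err := client.Get(base + "/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 and "ok"

	resp, err = client.Get(base + "/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.28.4 in this run
}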
	I0229 17:53:15.317051   11452 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 17:53:15.347821   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods
	I0229 17:53:15.347821   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.347821   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.347821   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.356155   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 17:53:15.356198   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.356198   11452 round_trippers.go:580]     Audit-Id: 3edffbf5-ec7e-4e26-bd4a-42bcf046b3c0
	I0229 17:53:15.356198   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.356198   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.356285   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.356285   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.356285   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.357320   11452 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"538"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"530","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49466 chars]
	I0229 17:53:15.360336   11452 system_pods.go:59] 7 kube-system pods found
	I0229 17:53:15.360400   11452 system_pods.go:61] "coredns-5dd5756b68-ntlsp" [719cbf17-fd3a-4d95-8ca8-ce844e40cd09] Running
	I0229 17:53:15.360400   11452 system_pods.go:61] "etcd-functional-686300" [cec45e60-81f4-4664-af69-afc8eb5ebb1c] Running
	I0229 17:53:15.360477   11452 system_pods.go:61] "kube-apiserver-functional-686300" [badf4941-60df-4d0f-a1b2-6ce4d4c07903] Running
	I0229 17:53:15.360477   11452 system_pods.go:61] "kube-controller-manager-functional-686300" [db554828-2110-4b08-aa00-3d21b8358f00] Running
	I0229 17:53:15.360516   11452 system_pods.go:61] "kube-proxy-qhmp2" [5ce4d53d-2d91-44f7-acd5-d5dc6345b62a] Running
	I0229 17:53:15.360516   11452 system_pods.go:61] "kube-scheduler-functional-686300" [a4ac9ed9-a91e-4b2f-92ab-929b0ea21b39] Running
	I0229 17:53:15.360516   11452 system_pods.go:61] "storage-provisioner" [e194a7c7-6279-4253-9460-70ca0f741a14] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 17:53:15.360516   11452 system_pods.go:74] duration metric: took 43.4648ms to wait for pod list to return data ...
	I0229 17:53:15.360641   11452 default_sa.go:34] waiting for default service account to be created ...
	I0229 17:53:15.539707   11452 request.go:629] Waited for 178.8546ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57189/api/v1/namespaces/default/serviceaccounts
	I0229 17:53:15.540074   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/default/serviceaccounts
	I0229 17:53:15.540074   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.540074   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.540074   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.610891   11452 round_trippers.go:574] Response Status: 200 OK in 70 milliseconds
	I0229 17:53:15.610891   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.610891   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.610891   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.610891   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.610891   11452 round_trippers.go:580]     Content-Length: 261
	I0229 17:53:15.610891   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.610891   11452 round_trippers.go:580]     Audit-Id: d929d39e-6c4e-4321-8006-5e6f4b01ed0a
	I0229 17:53:15.610891   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.610891   11452 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"539"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"798e0f58-119e-459c-9f7a-2bdf885e17ca","resourceVersion":"348","creationTimestamp":"2024-02-29T17:52:11Z"}}]}
	I0229 17:53:15.611671   11452 default_sa.go:45] found service account: "default"
	I0229 17:53:15.611740   11452 default_sa.go:55] duration metric: took 251.0279ms for default service account to be created ...
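The "Waited for ... due to client-side throttling" lines come from client-go's default token-bucket rate limiter (roughly 5 requests/s with a burst of 10 when QPS and Burst are left unset). A hedged sketch of the same service-account list with the limiter relaxed; the kubeconfig path and the QPS/Burst values are arbitrary choices for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}

	// Raising QPS/Burst relaxes the client-side token bucket that produced the
	// "Waited for ... due to client-side throttling" lines above.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		fmt.Println("service account:", sa.Name) // expect "default"
	}
}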
	I0229 17:53:15.611740   11452 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 17:53:15.748116   11452 request.go:629] Waited for 136.2499ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods
	I0229 17:53:15.748369   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/namespaces/kube-system/pods
	I0229 17:53:15.748369   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.748435   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.748435   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.755601   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 17:53:15.755601   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.755601   11452 round_trippers.go:580]     Audit-Id: 2225bbaf-7c46-4641-bc8b-08ff81c6808d
	I0229 17:53:15.755601   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.755601   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.755601   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.755709   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.755709   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.756401   11452 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"543"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ntlsp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"719cbf17-fd3a-4d95-8ca8-ce844e40cd09","resourceVersion":"530","creationTimestamp":"2024-02-29T17:52:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:52:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dd9fad73-7ab4-439a-a6b0-de5c95e9ba2d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49466 chars]
	I0229 17:53:15.759140   11452 system_pods.go:86] 7 kube-system pods found
	I0229 17:53:15.759140   11452 system_pods.go:89] "coredns-5dd5756b68-ntlsp" [719cbf17-fd3a-4d95-8ca8-ce844e40cd09] Running
	I0229 17:53:15.759140   11452 system_pods.go:89] "etcd-functional-686300" [cec45e60-81f4-4664-af69-afc8eb5ebb1c] Running
	I0229 17:53:15.759140   11452 system_pods.go:89] "kube-apiserver-functional-686300" [badf4941-60df-4d0f-a1b2-6ce4d4c07903] Running
	I0229 17:53:15.759140   11452 system_pods.go:89] "kube-controller-manager-functional-686300" [db554828-2110-4b08-aa00-3d21b8358f00] Running
	I0229 17:53:15.759140   11452 system_pods.go:89] "kube-proxy-qhmp2" [5ce4d53d-2d91-44f7-acd5-d5dc6345b62a] Running
	I0229 17:53:15.759140   11452 system_pods.go:89] "kube-scheduler-functional-686300" [a4ac9ed9-a91e-4b2f-92ab-929b0ea21b39] Running
	I0229 17:53:15.759140   11452 system_pods.go:89] "storage-provisioner" [e194a7c7-6279-4253-9460-70ca0f741a14] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 17:53:15.759140   11452 system_pods.go:126] duration metric: took 147.3982ms to wait for k8s-apps to be running ...
	I0229 17:53:15.759270   11452 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 17:53:15.767529   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:53:15.793035   11452 system_svc.go:56] duration metric: took 33.7644ms WaitForService to wait for kubelet.
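The kubelet check above boils down to the exit status of systemctl is-active. A minimal local sketch with os/exec; note that minikube runs the equivalent command over SSH inside the node container, and its logged form includes a stray "service" token:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 iff the unit is active,
	// so the error from Run() is the entire readiness signal.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not running:", err)
		return
	}
	fmt.Println("kubelet is running")
}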
	I0229 17:53:15.793035   11452 kubeadm.go:581] duration metric: took 11.5835772s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 17:53:15.793035   11452 node_conditions.go:102] verifying NodePressure condition ...
	I0229 17:53:15.940260   11452 request.go:629] Waited for 147.2243ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57189/api/v1/nodes
	I0229 17:53:15.940260   11452 round_trippers.go:463] GET https://127.0.0.1:57189/api/v1/nodes
	I0229 17:53:15.940260   11452 round_trippers.go:469] Request Headers:
	I0229 17:53:15.940260   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 17:53:15.940260   11452 round_trippers.go:473]     Accept: application/json, */*
	I0229 17:53:15.946433   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 17:53:15.946433   11452 round_trippers.go:577] Response Headers:
	I0229 17:53:15.946433   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 17:53:15.946433   11452 round_trippers.go:580]     Content-Type: application/json
	I0229 17:53:15.946433   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b562eb78-5298-440d-891b-937129f1b338
	I0229 17:53:15.946510   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 91311b37-ce6f-4662-afbe-509f6cc3f31d
	I0229 17:53:15.946510   11452 round_trippers.go:580]     Date: Thu, 29 Feb 2024 17:53:15 GMT
	I0229 17:53:15.946510   11452 round_trippers.go:580]     Audit-Id: fb170e9b-04d2-4197-86c3-cca52760096f
	I0229 17:53:15.946662   11452 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"543"},"items":[{"metadata":{"name":"functional-686300","uid":"f93d3b9b-5195-4da6-8f12-c08491e0acc5","resourceVersion":"428","creationTimestamp":"2024-02-29T17:51:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-686300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-686300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_51_59_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I0229 17:53:15.947046   11452 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 17:53:15.947046   11452 node_conditions.go:123] node cpu capacity is 16
	I0229 17:53:15.947046   11452 node_conditions.go:105] duration metric: took 154.01ms to run NodePressure ...
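The NodePressure verification reads node capacity and the pressure conditions out of the NodeList response above. A client-go sketch under the same assumptions as earlier (illustrative kubeconfig path, not minikube's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// Any pressure condition that is True would fail the check.
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure: %s\n", c.Type)
				}
			}
		}
	}
}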
	I0229 17:53:15.947046   11452 start.go:228] waiting for startup goroutines ...
	I0229 17:53:15.947046   11452 start.go:233] waiting for cluster config update ...
	I0229 17:53:15.947046   11452 start.go:242] writing updated cluster config ...
	I0229 17:53:15.960122   11452 ssh_runner.go:195] Run: rm -f paused
	I0229 17:53:16.095848   11452 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 17:53:16.105418   11452 out.go:177] * Done! kubectl is now configured to use "functional-686300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 17:52:53 functional-686300 cri-dockerd[4952]: time="2024-02-29T17:52:53Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 29 17:52:53 functional-686300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 29 17:52:53 functional-686300 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Feb 29 17:52:53 functional-686300 systemd[1]: cri-docker.service: Deactivated successfully.
	Feb 29 17:52:53 functional-686300 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Feb 29 17:52:53 functional-686300 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Start docker client with request timeout 0s"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Loaded network plugin cni"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Docker Info: &{ID:8271946a-d43e-414d-9f5e-1d934e3d7b66 Containers:14 ContainersRunning:0 ContainersPaused:0 ContainersStopped:14 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-02-29T17:52:53.477671206Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Ubuntu 22.04.3 LTS (containerized) OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0004322a0 NCPU:16 MemTotal:33657516032 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:functional-686300 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: DefaultAddressPools:[] Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support]}"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 29 17:52:53 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:53Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 29 17:52:53 functional-686300 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 29 17:52:57 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a751e05db5fa0b32cc39799ea8896c591d5b6e3a5321f100108aef3dbd509ab9/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:57 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef921ec895d84f95b04c489191f8865758400b58bb271053457c38ee73c92c52/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:57 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8393a60b927a0cc7f1d1025dd1325bb78aca5054b9f6d8d3db2922e6534904be/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:57 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9477d85e852194e77d4ad4a711a7dbad169316e3702d6900c602a738c84f62a3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:57 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/66c43e8f43fd2e30babb3076f93d168ba5cdc381f14cc5ff5d52012719941213/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:58 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8d765d1ac1d13f240b08054399d2575a3575cd969d6528fb141556e44f20679/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:58 functional-686300 cri-dockerd[5039]: time="2024-02-29T17:52:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/352d590e6aadec6c6b9d8ebfbcf8f1f0ae4234c0020453862fa05662a94350b5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 17:52:58 functional-686300 dockerd[4732]: time="2024-02-29T17:52:58.403359882Z" level=info msg="ignoring event" container=f5c7ffe6f174d69c3a9a5fc4e3831613d09dde3b12f4219d3e8c66a76d80659f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
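The repeated "Will attempt to re-write config file .../resolv.conf" entries describe cri-dockerd replacing each container's resolver config. Below is a sketch of the write-to-temp-then-rename pattern such a rewrite would typically use; the file paths and helper name are hypothetical, not cri-dockerd's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// rewriteResolvConf atomically replaces a resolv.conf, in the spirit of the
// cri-dockerd log lines above (e.g. nameserver 192.168.65.254, options ndots:0).
func rewriteResolvConf(path string, nameservers, options []string) error {
	var buf []byte
	for _, ns := range nameservers {
		buf = append(buf, []byte("nameserver "+ns+"\n")...)
	}
	for _, opt := range options {
		buf = append(buf, []byte("options "+opt+"\n")...)
	}
	tmp, err := os.CreateTemp(filepath.Dir(path), ".resolv.conf.*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename succeeds
	if _, err := tmp.Write(buf); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Rename is atomic on the same filesystem, so readers never see a partial file.
	return os.Rename(tmp.Name(), path)
}

func main() {
	if err := rewriteResolvConf("/tmp/resolv.conf.demo", []string{"192.168.65.254"}, []string{"ndots:0"}); err != nil {
		panic(err)
	}
	fmt.Println("rewrote /tmp/resolv.conf.demo")
}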
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	87c446ff9731b       6e38f40d628db       22 seconds ago       Running             storage-provisioner       2                   ef921ec895d84       storage-provisioner
	fd9c3af8cfdaa       d058aa5ab969c       39 seconds ago       Running             kube-controller-manager   1                   a8d765d1ac1d1       kube-controller-manager-functional-686300
	6df09055ac44f       e3db313c6dbc0       39 seconds ago       Running             kube-scheduler            1                   352d590e6aade       kube-scheduler-functional-686300
	d3cd5dcad37f8       ead0a4a53df89       39 seconds ago       Running             coredns                   1                   8393a60b927a0       coredns-5dd5756b68-ntlsp
	4f44b18736333       7fe0e6f37db33       40 seconds ago       Running             kube-apiserver            1                   66c43e8f43fd2       kube-apiserver-functional-686300
	412a3b356bade       73deb9a3f7025       40 seconds ago       Running             etcd                      1                   9477d85e85219       etcd-functional-686300
	f5c7ffe6f174d       6e38f40d628db       40 seconds ago       Exited              storage-provisioner       1                   ef921ec895d84       storage-provisioner
	62bc36e087aa9       83f6cc407eed8       40 seconds ago       Running             kube-proxy                1                   a751e05db5fa0       kube-proxy-qhmp2
	576d7848c2107       83f6cc407eed8       About a minute ago   Exited              kube-proxy                0                   bd1133ab7e74e       kube-proxy-qhmp2
	b4c7003e90f4d       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   22181f0b7346d       etcd-functional-686300
	77f6ecf52c966       7fe0e6f37db33       About a minute ago   Exited              kube-apiserver            0                   8775bce40b7a3       kube-apiserver-functional-686300
	
	
	==> coredns [d3cd5dcad37f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43173 - 9 "HINFO IN 7148176332134644412.2381937493650930720. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085867839s
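Once CoreDNS stops logging "waiting for Kubernetes API" and serves on :53, in-cluster names resolve through the kube-dns service. A Go sketch that points a resolver at an assumed cluster DNS address (10.96.0.10 is the conventional kube-dns ClusterIP; it is not confirmed by this log):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route lookups to the assumed cluster DNS endpoint instead of /etc/resolv.conf.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53") // assumed kube-dns ClusterIP
		},
	}
	addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved:", addrs)
}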
	
	
	==> describe nodes <==
	Name:               functional-686300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-686300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=functional-686300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T17_51_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 17:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-686300
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 17:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 17:53:33 +0000   Thu, 29 Feb 2024 17:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 17:53:33 +0000   Thu, 29 Feb 2024 17:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 17:53:33 +0000   Thu, 29 Feb 2024 17:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 17:53:33 +0000   Thu, 29 Feb 2024 17:52:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-686300
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868668Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868668Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4d6c2c676d9445fb2ad659ba965036d
	  System UUID:                c4d6c2c676d9445fb2ad659ba965036d
	  Boot ID:                    d6e19e81-4b60-457d-ba23-5f12408b314c
	  Kernel Version:             5.15.133.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ntlsp                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 etcd-functional-686300                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         99s
	  kube-system                 kube-apiserver-functional-686300             250m (1%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-functional-686300    200m (1%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-qhmp2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-functional-686300             100m (0%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 83s                  kube-proxy       
	  Normal  Starting                 34s                  kube-proxy       
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x8 over 109s)  kubelet          Node functional-686300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 109s)  kubelet          Node functional-686300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x7 over 109s)  kubelet          Node functional-686300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                  kubelet          Node functional-686300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet          Node functional-686300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet          Node functional-686300 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             99s                  kubelet          Node functional-686300 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                  kubelet          Starting kubelet.
	  Normal  NodeReady                89s                  kubelet          Node functional-686300 status is now: NodeReady
	  Normal  RegisteredNode           88s                  node-controller  Node functional-686300 event: Registered Node functional-686300 in Controller
	  Normal  RegisteredNode           23s                  node-controller  Node functional-686300 event: Registered Node functional-686300 in Controller
	
	
	==> dmesg <==
	
	[  +0.002544] WSL (1) ERROR: ConfigMountFsTab:2579: Processing fstab with mount -a failed.
	[  +0.003664] WSL (1) ERROR: ConfigApplyWindowsLibPath:2527: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.005411] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001907] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003929] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001693] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004143] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.232928] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.154018] WSL (1) ERROR: ConfigApplyWindowsLibPath:2527: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000005]  failed 2
	[  +0.012140] FS-Cache: Duplicate cookie detected
	[  +0.010036] FS-Cache: O-cookie c=0000001d [p=00000002 fl=222 nc=0 na=1]
	[  +0.011106] FS-Cache: O-cookie d=0000000083415500{9P.session} n=00000000e7da8404
	[  +0.001707] FS-Cache: O-key=[10] '34323934393338323131'
	[  +0.000976] FS-Cache: N-cookie c=0000001e [p=00000002 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=0000000083415500{9P.session} n=00000000a73a2086
	[  +0.001395] FS-Cache: N-key=[10] '34323934393338323131'
	[  +0.014745] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.137576] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.629198] netlink: 'init': attribute type 4 has an invalid length.
	[  +0.629094] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [412a3b356bad] <==
	{"level":"info","ts":"2024-02-29T17:52:59.022729Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T17:52:59.022836Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-29T17:52:59.022932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-29T17:52:59.023166Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-29T17:52:59.025825Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T17:52:59.026054Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T17:52:59.026088Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T17:52:59.026241Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T17:52:59.026251Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T17:52:59.10329Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:52:59.103336Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:53:00.32833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T17:53:00.328438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T17:53:00.328458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-29T17:53:00.328471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T17:53:00.328478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T17:53:00.328526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-02-29T17:53:00.328646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-02-29T17:53:00.333618Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-686300 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T17:53:00.333727Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T17:53:00.334115Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T17:53:00.335701Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T17:53:00.336211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T17:53:00.40347Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T17:53:00.4036Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [b4c7003e90f4] <==
	{"level":"info","ts":"2024-02-29T17:51:52.156346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T17:51:52.156761Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T17:51:52.156936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T17:51:52.157729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-29T17:51:52.158789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T17:51:52.160008Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:51:52.160184Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:51:52.160296Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:51:54.7104Z","caller":"traceutil/trace.go:171","msg":"trace[1617328028] transaction","detail":"{read_only:false; response_revision:2; number_of_response:1; }","duration":"106.053453ms","start":"2024-02-29T17:51:54.604326Z","end":"2024-02-29T17:51:54.71038Z","steps":["trace[1617328028] 'process raft request'  (duration: 105.815169ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:51:54.71793Z","caller":"traceutil/trace.go:171","msg":"trace[776199453] linearizableReadLoop","detail":"{readStateIndex:6; appliedIndex:4; }","duration":"108.268704ms","start":"2024-02-29T17:51:54.60964Z","end":"2024-02-29T17:51:54.717909Z","steps":["trace[776199453] 'read index received'  (duration: 100.672616ms)","trace[776199453] 'applied index is now lower than readState.Index'  (duration: 7.595388ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T17:51:54.718154Z","caller":"traceutil/trace.go:171","msg":"trace[493954961] transaction","detail":"{read_only:false; response_revision:3; number_of_response:1; }","duration":"108.570383ms","start":"2024-02-29T17:51:54.60957Z","end":"2024-02-29T17:51:54.718141Z","steps":["trace[493954961] 'process raft request'  (duration: 108.125713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:51:54.718622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.821667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-686300\" ","response":"range_response_count:1 size:2913"}
	{"level":"info","ts":"2024-02-29T17:51:54.718815Z","caller":"traceutil/trace.go:171","msg":"trace[776966502] range","detail":"{range_begin:/registry/minions/functional-686300; range_end:; response_count:1; response_revision:10; }","duration":"109.182043ms","start":"2024-02-29T17:51:54.609619Z","end":"2024-02-29T17:51:54.718801Z","steps":["trace[776966502] 'agreement among raft nodes before linearized reading'  (duration: 108.744872ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:51:54.718636Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.736873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/functional-686300\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-02-29T17:51:54.719086Z","caller":"traceutil/trace.go:171","msg":"trace[995600949] range","detail":"{range_begin:/registry/csinodes/functional-686300; range_end:; response_count:0; response_revision:10; }","duration":"109.185043ms","start":"2024-02-29T17:51:54.609886Z","end":"2024-02-29T17:51:54.719071Z","steps":["trace[995600949] 'agreement among raft nodes before linearized reading'  (duration: 108.661978ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:52:40.813998Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T17:52:40.814071Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-686300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-02-29T17:52:40.814172Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T17:52:40.814255Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T17:52:40.907133Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T17:52:40.907215Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T17:52:40.907301Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-02-29T17:52:41.003396Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T17:52:41.003567Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-29T17:52:41.003589Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-686300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 17:53:38 up  1:58,  0 users,  load average: 1.72, 1.87, 1.39
	Linux functional-686300 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [4f44b1873633] <==
	I0229 17:53:02.769694       1 controller.go:116] Starting legacy_token_tracking_controller
	I0229 17:53:02.769802       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0229 17:53:02.770083       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0229 17:53:02.805145       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0229 17:53:02.805199       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0229 17:53:02.805258       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0229 17:53:02.903754       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 17:53:02.904307       1 aggregator.go:166] initial CRD sync complete...
	I0229 17:53:02.904324       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 17:53:02.904333       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 17:53:02.904344       1 cache.go:39] Caches are synced for autoregister controller
	I0229 17:53:02.904682       1 available_controller.go:423] Starting AvailableConditionController
	I0229 17:53:02.904707       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0229 17:53:02.905240       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 17:53:03.003755       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 17:53:03.004762       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 17:53:03.005056       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 17:53:03.005059       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 17:53:03.005100       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 17:53:03.008602       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 17:53:03.012108       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0229 17:53:03.110278       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 17:53:03.770983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 17:53:15.613729       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 17:53:15.641745       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [77f6ecf52c96] <==
	W0229 17:52:50.060600       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.086362       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.101592       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.123328       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.140624       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.185214       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.198080       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.200734       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.225194       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.278845       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.279099       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.300712       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.301198       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.348912       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.395152       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.403346       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.473908       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.491939       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.516059       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.598686       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.614389       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.621867       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.674726       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.675187       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 17:52:50.688688       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
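The wall of "connection refused" warnings above is the apiserver's gRPC client redialing etcd on 127.0.0.1:2379 while etcd restarts; it stops as soon as the socket accepts connections again. A tiny Go sketch of the same wait (the address and deadline are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Redial until the restarted etcd starts listening, mirroring the retry
	// loop behind the grpc "connection refused" warnings above.
	addr := "127.0.0.1:2379"
	deadline := time.Now().Add(30 * time.Second)
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("etcd is accepting connections on", addr)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up:", err) // "connection refused" while etcd is down
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}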
	
	
	==> kube-controller-manager [fd9c3af8cfda] <==
	I0229 17:53:15.603452       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 17:53:15.603420       1 shared_informer.go:318] Caches are synced for node
	I0229 17:53:15.603487       1 shared_informer.go:318] Caches are synced for persistent volume
	I0229 17:53:15.603550       1 range_allocator.go:174] "Sending events to api server"
	I0229 17:53:15.603457       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0229 17:53:15.603642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="156.602µs"
	I0229 17:53:15.603669       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0229 17:53:15.603678       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0229 17:53:15.603687       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0229 17:53:15.603724       1 shared_informer.go:318] Caches are synced for deployment
	I0229 17:53:15.619484       1 shared_informer.go:318] Caches are synced for taint
	I0229 17:53:15.619616       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0229 17:53:15.619714       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-686300"
	I0229 17:53:15.619780       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0229 17:53:15.619800       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0229 17:53:15.619828       1 taint_manager.go:210] "Sending events to api server"
	I0229 17:53:15.620019       1 event.go:307] "Event occurred" object="functional-686300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-686300 event: Registered Node functional-686300 in Controller"
	I0229 17:53:15.643902       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 17:53:15.644238       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 17:53:15.703302       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 17:53:15.703451       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 17:53:15.743733       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 17:53:16.086757       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 17:53:16.086947       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 17:53:16.116248       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [576d7848c210] <==
	I0229 17:52:14.604128       1 server_others.go:69] "Using iptables proxy"
	I0229 17:52:14.704847       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 17:52:14.919431       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 17:52:14.927707       1 server_others.go:152] "Using iptables Proxier"
	I0229 17:52:14.927873       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 17:52:14.927988       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 17:52:14.928074       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 17:52:14.928950       1 server.go:846] "Version info" version="v1.28.4"
	I0229 17:52:14.928982       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 17:52:14.930041       1 config.go:188] "Starting service config controller"
	I0229 17:52:14.932784       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 17:52:14.930055       1 config.go:97] "Starting endpoint slice config controller"
	I0229 17:52:14.932826       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 17:52:14.930082       1 config.go:315] "Starting node config controller"
	I0229 17:52:14.932852       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 17:52:15.034203       1 shared_informer.go:318] Caches are synced for node config
	I0229 17:52:15.034237       1 shared_informer.go:318] Caches are synced for service config
	I0229 17:52:15.034256       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [62bc36e087aa] <==
	I0229 17:52:58.415225       1 server_others.go:69] "Using iptables proxy"
	E0229 17:52:58.418240       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-686300": dial tcp 192.168.49.2:8441: connect: connection refused
	I0229 17:53:03.115493       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0229 17:53:03.239793       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0229 17:53:03.243446       1 server_others.go:152] "Using iptables Proxier"
	I0229 17:53:03.243681       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0229 17:53:03.243694       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0229 17:53:03.243728       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 17:53:03.244187       1 server.go:846] "Version info" version="v1.28.4"
	I0229 17:53:03.244346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 17:53:03.253900       1 config.go:188] "Starting service config controller"
	I0229 17:53:03.254146       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 17:53:03.254360       1 config.go:97] "Starting endpoint slice config controller"
	I0229 17:53:03.254371       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 17:53:03.254494       1 config.go:315] "Starting node config controller"
	I0229 17:53:03.254580       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 17:53:03.355091       1 shared_informer.go:318] Caches are synced for node config
	I0229 17:53:03.355147       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 17:53:03.355292       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6df09055ac44] <==
	I0229 17:53:00.711099       1 serving.go:348] Generated self-signed cert in-memory
	W0229 17:53:02.904325       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 17:53:02.904475       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0229 17:53:02.904614       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 17:53:02.904721       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 17:53:03.014426       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 17:53:03.014552       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 17:53:03.017831       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 17:53:03.018116       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 17:53:03.020028       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 17:53:03.020158       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 17:53:03.118672       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.439048    2694 status_manager.go:853] "Failed to get status for pod" podUID="5ce4d53d-2d91-44f7-acd5-d5dc6345b62a" pod="kube-system/kube-proxy-qhmp2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-qhmp2\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.439565    2694 status_manager.go:853] "Failed to get status for pod" podUID="e194a7c7-6279-4253-9460-70ca0f741a14" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.440053    2694 status_manager.go:853] "Failed to get status for pod" podUID="719cbf17-fd3a-4d95-8ca8-ce844e40cd09" pod="kube-system/coredns-5dd5756b68-ntlsp" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ntlsp\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.440802    2694 status_manager.go:853] "Failed to get status for pod" podUID="67c8bf0ab7634b3488482d1ffc5ab864" pod="kube-system/etcd-functional-686300" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-686300\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.441608    2694 status_manager.go:853] "Failed to get status for pod" podUID="ed92707ba3cc43df1828f1fa1c3c48a1" pod="kube-system/kube-apiserver-functional-686300" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-686300\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.442165    2694 status_manager.go:853] "Failed to get status for pod" podUID="fb1ebaabe1c07fe1c7d63851d3665011" pod="kube-system/kube-scheduler-functional-686300" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-686300\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: I0229 17:52:56.442805    2694 status_manager.go:853] "Failed to get status for pod" podUID="09c959bdf04c8a95cd733fa038e93752" pod="kube-system/kube-controller-manager-functional-686300" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-686300\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Feb 29 17:52:56 functional-686300 kubelet[2694]: E0229 17:52:56.637307    2694 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-686300.17b866ddc14b4be9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-686300", UID:"ed92707ba3cc43df1828f1fa1c3c48a1", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get \"https://192.168.49.2:8441/readyz\": dial tcp 192.168.49.2:8441: connect: connection refused", Source:v1.EventSource{Component:"kubelet", Host:"functional-686300"}, FirstTimestamp:time.Date(2024, time.February, 29, 17, 52, 41, 203846121, time.Local), LastTimestamp:time.Date(2024, time.February, 29, 17, 52, 41, 203846121, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-686300"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.015447    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352d590e6aadec6c6b9d8ebfbcf8f1f0ae4234c0020453862fa05662a94350b5"
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.115189    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d765d1ac1d13f240b08054399d2575a3575cd969d6528fb141556e44f20679"
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.130526    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c43e8f43fd2e30babb3076f93d168ba5cdc381f14cc5ff5d52012719941213"
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.222993    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8393a60b927a0cc7f1d1025dd1325bb78aca5054b9f6d8d3db2922e6534904be"
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.309791    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a751e05db5fa0b32cc39799ea8896c591d5b6e3a5321f100108aef3dbd509ab9"
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.622936    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef921ec895d84f95b04c489191f8865758400b58bb271053457c38ee73c92c52"
	Feb 29 17:52:58 functional-686300 kubelet[2694]: I0229 17:52:58.810725    2694 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9477d85e852194e77d4ad4a711a7dbad169316e3702d6900c602a738c84f62a3"
	Feb 29 17:52:59 functional-686300 kubelet[2694]: I0229 17:52:59.114737    2694 scope.go:117] "RemoveContainer" containerID="00274b65ab9af629f415b786fcdaf53bc8fdfb2e54aadab448bc886756ed7dc2"
	Feb 29 17:52:59 functional-686300 kubelet[2694]: I0229 17:52:59.315863    2694 scope.go:117] "RemoveContainer" containerID="e90ad7358cb165d31f8d033a624e88415dc42b71c08d4847999ca6d347112ad2"
	Feb 29 17:52:59 functional-686300 kubelet[2694]: I0229 17:52:59.439924    2694 scope.go:117] "RemoveContainer" containerID="69df409cfa36e737002174dfb8dccbbcd51ac12b38ae7c391f5d0fb27718ea9c"
	Feb 29 17:52:59 functional-686300 kubelet[2694]: I0229 17:52:59.538555    2694 scope.go:117] "RemoveContainer" containerID="f7c322aa03f0cdab81502cc5795e3ed7311c59a47196f0ee8c7b5ccad36de021"
	Feb 29 17:53:00 functional-686300 kubelet[2694]: I0229 17:53:00.013860    2694 scope.go:117] "RemoveContainer" containerID="f5c7ffe6f174d69c3a9a5fc4e3831613d09dde3b12f4219d3e8c66a76d80659f"
	Feb 29 17:53:00 functional-686300 kubelet[2694]: E0229 17:53:00.014269    2694 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e194a7c7-6279-4253-9460-70ca0f741a14)\"" pod="kube-system/storage-provisioner" podUID="e194a7c7-6279-4253-9460-70ca0f741a14"
	Feb 29 17:53:01 functional-686300 kubelet[2694]: I0229 17:53:01.220550    2694 scope.go:117] "RemoveContainer" containerID="f5c7ffe6f174d69c3a9a5fc4e3831613d09dde3b12f4219d3e8c66a76d80659f"
	Feb 29 17:53:01 functional-686300 kubelet[2694]: E0229 17:53:01.220812    2694 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e194a7c7-6279-4253-9460-70ca0f741a14)\"" pod="kube-system/storage-provisioner" podUID="e194a7c7-6279-4253-9460-70ca0f741a14"
	Feb 29 17:53:02 functional-686300 kubelet[2694]: E0229 17:53:02.821350    2694 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 17:53:15 functional-686300 kubelet[2694]: I0229 17:53:15.121832    2694 scope.go:117] "RemoveContainer" containerID="f5c7ffe6f174d69c3a9a5fc4e3831613d09dde3b12f4219d3e8c66a76d80659f"
	
	
	==> storage-provisioner [87c446ff9731] <==
	I0229 17:53:15.636251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 17:53:15.706844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 17:53:15.707010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 17:53:33.136888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 17:53:33.137207       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae91bb70-6c30-44e2-9057-8f9aa5108f2b", APIVersion:"v1", ResourceVersion:"548", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-686300_e41a0222-852b-47fb-b797-4d3963f391c6 became leader
	I0229 17:53:33.137608       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-686300_e41a0222-852b-47fb-b797-4d3963f391c6!
	I0229 17:53:33.239376       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-686300_e41a0222-852b-47fb-b797-4d3963f391c6!
	
	
	==> storage-provisioner [f5c7ffe6f174] <==
	I0229 17:52:58.318107       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 17:52:58.320410       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
** stderr ** 
	W0229 17:53:36.533157    3196 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
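
The stderr warning above recurs in every failing test in this report. The directory name in the failing path is the hex SHA-256 of the context name "default", which appears to be how the Docker CLI derives a context's metadata directory. The Go sketch below (illustrative only, not minikube or Docker CLI source) reproduces the exact failing path from the context name:

package main

import (
	"crypto/sha256"
	"fmt"
	"path/filepath"
)

// contextMetaPath mirrors (as an assumption) how the Docker CLI names a
// context's metadata directory: the hex SHA-256 of the context name.
func contextMetaPath(dockerHome, name string) string {
	sum := sha256.Sum256([]byte(name))
	return filepath.Join(dockerHome, "contexts", "meta", fmt.Sprintf("%x", sum), "meta.json")
}

func main() {
	// Prints the path every failing command in this report complains about:
	// ...\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json
	fmt.Println(contextMetaPath(`C:\Users\jenkins.minikube7\.docker`, "default"))
}

In other words: the CLI is looking for metadata for the "default" context on disk, the file is absent on this Jenkins host, and each minikube invocation emits the warning on stderr.
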
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-686300 -n functional-686300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-686300 -n functional-686300: (1.4477098s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-686300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (6.35s)
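
The kubelet lines in the post-mortem above show storage-provisioner in CrashLoopBackOff with "back-off 10s restarting failed container". The kubelet's default crash backoff starts at 10s, doubles on each restart, and is capped at 5 minutes; the plain-Go loop below is a sketch of that schedule (assumed defaults, not kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: per-container crash backoff starts at 10s,
	// doubles on each restart, and is capped at 5 minutes.
	wait, maxWait := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: CrashLoopBackOff, back-off %s\n", restart, wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}

That is why the log shows "back-off 10s" at 17:53:00 and again at 17:53:01: the container crashed once and both sync attempts fell inside the same 10s window.
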
TestFunctional/parallel/ConfigCmd (2.11s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-686300 config unset cpus" to be -""- but got *"W0229 17:54:40.690539     772 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 config get cpus: exit status 14 (342.5136ms)

** stderr ** 
	W0229 17:54:41.099086   11228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-686300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0229 17:54:41.099086   11228 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-686300 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0229 17:54:41.419504   12036 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-686300 config get cpus" to be -""- but got *"W0229 17:54:41.752335    7288 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-686300 config unset cpus" to be -""- but got *"W0229 17:54:42.068864   15316 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 config get cpus
E0229 17:54:42.480617    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 config get cpus: exit status 14 (354.1605ms)

** stderr ** 
	W0229 17:54:42.473375    8100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-686300 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0229 17:54:42.473375    8100 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube7\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.11s)
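
Each ConfigCmd case above fails only because the exact-match comparison of stderr sees the Docker CLI context warning prepended to the otherwise-expected message. A hypothetical filter like the one below (not minikube's actual helper; the function name is invented) shows how stripping the known-noise line would make the observed stderr equal the expected text:

package main

import (
	"fmt"
	"strings"
)

// stripKnownWarnings is a hypothetical helper (not minikube's real code): it
// drops the known-noise Docker CLI context warning before an exact match.
func stripKnownWarnings(stderr string) string {
	var kept []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.Contains(line, "Unable to resolve the current Docker CLI context") {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimSpace(strings.Join(kept, "\n"))
}

func main() {
	got := "W0229 17:54:42.473375    8100 main.go:291] Unable to resolve the current Docker CLI context \"default\": ...\nError: specified key could not be found in config"
	fmt.Println(stripKnownWarnings(got)) // Error: specified key could not be found in config
}
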

TestIngressAddonLegacy/StartLegacyK8sCluster (575.95s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-633500 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0229 18:04:41.109059    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.123745    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.139760    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.169954    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.217286    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.311341    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.481562    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:41.810776    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:42.463924    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:43.759183    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:46.330525    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:04:51.456762    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:05:01.703161    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:05:22.197405    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:06:03.164286    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:06:58.494076    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 18:07:25.093840    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:08:21.702338    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 18:09:41.116269    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:10:08.948526    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
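
The repeated cert_rotation.go:168 errors above come from a client-go certificate reload loop in the test harness that still points at the client.crt of the addons-850800 and functional-686300 profiles, which earlier tests deleted. A minimal sketch of the failing reload, under that assumption (illustrative stand-in, not the client-go implementation):

package main

import (
	"crypto/tls"
	"fmt"
)

// reloadClientCert is an illustrative stand-in for the reload that
// cert_rotation.go retries: once a minikube profile is deleted, its
// client.crt/client.key vanish and every reload fails with the OS error
// seen above.
func reloadClientCert(certFile, keyFile string) (*tls.Certificate, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("key failed with : %w", err)
	}
	return &cert, nil
}

func main() {
	_, err := reloadClientCert(
		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt`,
		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.key`,
	)
	fmt.Println(err)
}
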
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-633500 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: exit status 109 (9m35.5891484s)

-- stdout --
	* [ingress-addon-legacy-633500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-633500 in cluster ingress-addon-legacy-633500
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 18:11:21 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:21.508898    5971 pod_workers.go:191] Error syncing pod 003b0f8c06c4e64f37c803d613312348 ("etcd-ingress-addon-legacy-633500_kube-system(003b0f8c06c4e64f37c803d613312348)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	  Feb 29 18:11:25 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:25.515746    5971 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-633500_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	  Feb 29 18:11:28 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:28.513011    5971 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-633500_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	
	

-- /stdout --
** stderr ** 
	W0229 18:02:16.247125    4160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:02:16.315936    4160 out.go:291] Setting OutFile to fd 1272 ...
	I0229 18:02:16.316383    4160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:02:16.316383    4160 out.go:304] Setting ErrFile to fd 1348...
	I0229 18:02:16.316383    4160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:02:16.335118    4160 out.go:298] Setting JSON to false
	I0229 18:02:16.339046    4160 start.go:129] hostinfo: {"hostname":"minikube7","uptime":7696,"bootTime":1709222039,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 18:02:16.339046    4160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:02:16.344130    4160 out.go:177] * [ingress-addon-legacy-633500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:02:16.348278    4160 notify.go:220] Checking for updates...
	I0229 18:02:16.351022    4160 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 18:02:16.354826    4160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:02:16.357017    4160 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 18:02:16.360028    4160 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:02:16.362330    4160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:02:16.365794    4160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:02:16.620716    4160 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 18:02:16.631110    4160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:02:16.963971    4160 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:78 SystemTime:2024-02-29 18:02:16.921168996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:02:16.968533    4160 out.go:177] * Using the docker driver based on user configuration
	I0229 18:02:16.972740    4160 start.go:299] selected driver: docker
	I0229 18:02:16.972740    4160 start.go:903] validating driver "docker" against <nil>
	I0229 18:02:16.972740    4160 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:02:17.045196    4160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:02:17.384309    4160 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:78 SystemTime:2024-02-29 18:02:17.342653114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:02:17.385276    4160 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:02:17.386056    4160 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:02:17.389368    4160 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 18:02:17.391294    4160 cni.go:84] Creating CNI manager for ""
	I0229 18:02:17.391294    4160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:02:17.391294    4160 start_flags.go:323] config:
	{Name:ingress-addon-legacy-633500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-633500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:02:17.394646    4160 out.go:177] * Starting control plane node ingress-addon-legacy-633500 in cluster ingress-addon-legacy-633500
	I0229 18:02:17.398161    4160 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 18:02:17.402290    4160 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 18:02:17.405162    4160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:02:17.405225    4160 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 18:02:17.448580    4160 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 18:02:17.448687    4160 cache.go:56] Caching tarball of preloaded images
	I0229 18:02:17.449184    4160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:02:17.452131    4160 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 18:02:17.457563    4160 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 18:02:17.522202    4160 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 18:02:17.569170    4160 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 18:02:17.569170    4160 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 18:02:21.823915    4160 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 18:02:21.824952    4160 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 18:02:22.852361    4160 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0229 18:02:22.853563    4160 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\config.json ...
	I0229 18:02:22.854198    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\config.json: {Name:mkb7220762728f4b2e514db23fb86659a3c6f23c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:02:22.854930    4160 cache.go:194] Successfully downloaded all kic artifacts
	I0229 18:02:22.854930    4160 start.go:365] acquiring machines lock for ingress-addon-legacy-633500: {Name:mkb6f270ace558b18ef507d6471d4e20bdd46c87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:02:22.855940    4160 start.go:369] acquired machines lock for "ingress-addon-legacy-633500" in 0s
	I0229 18:02:22.855940    4160 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-633500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-633500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:02:22.855940    4160 start.go:125] createHost starting for "" (driver="docker")
	I0229 18:02:23.038941    4160 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0229 18:02:23.039435    4160 start.go:159] libmachine.API.Create for "ingress-addon-legacy-633500" (driver="docker")
	I0229 18:02:23.039435    4160 client.go:168] LocalClient.Create starting
	I0229 18:02:23.040044    4160 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0229 18:02:23.040862    4160 main.go:141] libmachine: Decoding PEM data...
	I0229 18:02:23.040893    4160 main.go:141] libmachine: Parsing certificate...
	I0229 18:02:23.040893    4160 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0229 18:02:23.040893    4160 main.go:141] libmachine: Decoding PEM data...
	I0229 18:02:23.040893    4160 main.go:141] libmachine: Parsing certificate...
	I0229 18:02:23.054405    4160 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-633500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 18:02:23.233542    4160 cli_runner.go:211] docker network inspect ingress-addon-legacy-633500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 18:02:23.243499    4160 network_create.go:281] running [docker network inspect ingress-addon-legacy-633500] to gather additional debugging logs...
	I0229 18:02:23.243499    4160 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-633500
	W0229 18:02:23.419974    4160 cli_runner.go:211] docker network inspect ingress-addon-legacy-633500 returned with exit code 1
	I0229 18:02:23.419974    4160 network_create.go:284] error running [docker network inspect ingress-addon-legacy-633500]: docker network inspect ingress-addon-legacy-633500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-633500 not found
	I0229 18:02:23.419974    4160 network_create.go:286] output of [docker network inspect ingress-addon-legacy-633500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-633500 not found
	
	** /stderr **
	I0229 18:02:23.430416    4160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 18:02:23.628734    4160 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00255ddd0}
	I0229 18:02:23.628734    4160 network_create.go:124] attempt to create docker network ingress-addon-legacy-633500 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0229 18:02:23.637790    4160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-633500 ingress-addon-legacy-633500
	I0229 18:02:24.380139    4160 network_create.go:108] docker network ingress-addon-legacy-633500 192.168.49.0/24 created
	I0229 18:02:24.381143    4160 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-633500" container
	I0229 18:02:24.397454    4160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 18:02:24.577717    4160 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-633500 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-633500 --label created_by.minikube.sigs.k8s.io=true
	I0229 18:02:24.753058    4160 oci.go:103] Successfully created a docker volume ingress-addon-legacy-633500
	I0229 18:02:24.764253    4160 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-633500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-633500 --entrypoint /usr/bin/test -v ingress-addon-legacy-633500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 18:02:27.061152    4160 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-633500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-633500 --entrypoint /usr/bin/test -v ingress-addon-legacy-633500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.2968081s)
	I0229 18:02:27.061152    4160 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-633500
	I0229 18:02:27.061152    4160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:02:27.061152    4160 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 18:02:27.070750    4160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-633500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 18:02:53.262219    4160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-633500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (26.1912723s)
	I0229 18:02:53.262219    4160 kic.go:203] duration metric: took 26.200870 seconds to extract preloaded images to volume
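	The two docker run steps above seed the node's named volume: the first mount of an empty named volume at /var makes Docker copy the image's /var contents into it, and a throwaway tar container then unpacks the preload tarball into the same volume. A minimal sketch of the pattern, with a hypothetical volume name demo-vol and <kicbase-image> standing in for any image that ships GNU tar and lz4 (the kicbase image does):
	
		docker volume create demo-vol
		# first mount of an empty named volume copies the image's /var into it
		docker run --rm --entrypoint /usr/bin/test -v demo-vol:/var <kicbase-image> -d /var/lib
		# unpack a host-side .tar.lz4 into the volume via a disposable container
		docker run --rm --entrypoint /usr/bin/tar \
			-v ./preloaded.tar.lz4:/preloaded.tar:ro \
			-v demo-vol:/extractDir <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir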
	I0229 18:02:53.271348    4160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:02:53.589502    4160 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:true NGoroutines:78 SystemTime:2024-02-29 18:02:53.550700929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:02:53.601943    4160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 18:02:53.942976    4160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-633500 --name ingress-addon-legacy-633500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-633500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-633500 --network ingress-addon-legacy-633500 --ip 192.168.49.2 --volume ingress-addon-legacy-633500:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 18:02:54.828354    4160 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-633500 --format={{.State.Running}}
	I0229 18:02:55.008895    4160 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-633500 --format={{.State.Status}}
	I0229 18:02:55.180945    4160 cli_runner.go:164] Run: docker exec ingress-addon-legacy-633500 stat /var/lib/dpkg/alternatives/iptables
	I0229 18:02:55.437241    4160 oci.go:144] the created container "ingress-addon-legacy-633500" has a running status.
	I0229 18:02:55.438212    4160 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa...
	I0229 18:02:55.647088    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0229 18:02:55.656724    4160 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 18:02:55.890772    4160 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-633500 --format={{.State.Status}}
	I0229 18:02:56.089787    4160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 18:02:56.089787    4160 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-633500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 18:02:56.354668    4160 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa...
	I0229 18:02:58.628782    4160 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-633500 --format={{.State.Status}}
	I0229 18:02:58.790630    4160 machine.go:88] provisioning docker machine ...
	I0229 18:02:58.790756    4160 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-633500"
	I0229 18:02:58.799857    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
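	The inspect template in the line above (and repeated throughout provisioning) resolves which host port Docker published for the container's SSH port 22. From a Unix shell the same lookup is:
	
		docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-633500
		# equivalently: docker port ingress-addon-legacy-633500 22/tcp
		# in this run it resolves to 57688, the port the SSH client dials on 127.0.0.1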
	I0229 18:02:58.965134    4160 main.go:141] libmachine: Using SSH client type: native
	I0229 18:02:58.973791    4160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57688 <nil> <nil>}
	I0229 18:02:58.973791    4160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-633500 && echo "ingress-addon-legacy-633500" | sudo tee /etc/hostname
	I0229 18:02:59.175961    4160 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-633500
	
	I0229 18:02:59.185464    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:02:59.367542    4160 main.go:141] libmachine: Using SSH client type: native
	I0229 18:02:59.367542    4160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57688 <nil> <nil>}
	I0229 18:02:59.367542    4160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-633500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-633500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-633500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:02:59.539185    4160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:02:59.539185    4160 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 18:02:59.539185    4160 ubuntu.go:177] setting up certificates
	I0229 18:02:59.539389    4160 provision.go:83] configureAuth start
	I0229 18:02:59.549106    4160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-633500
	I0229 18:02:59.719414    4160 provision.go:138] copyHostCerts
	I0229 18:02:59.719537    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem
	I0229 18:02:59.719537    4160 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:02:59.719537    4160 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 18:02:59.720244    4160 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 18:02:59.721145    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem
	I0229 18:02:59.721145    4160 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:02:59.721145    4160 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 18:02:59.721886    4160 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:02:59.722941    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem
	I0229 18:02:59.723244    4160 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:02:59.723244    4160 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 18:02:59.723577    4160 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:02:59.724837    4160 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ingress-addon-legacy-633500 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-633500]
	I0229 18:03:00.305471    4160 provision.go:172] copyRemoteCerts
	I0229 18:03:00.317283    4160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:03:00.324432    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:00.480029    4160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:03:00.609931    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 18:03:00.610450    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:03:00.648499    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 18:03:00.649514    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 18:03:00.687367    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 18:03:00.687367    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:03:00.725607    4160 provision.go:86] duration metric: configureAuth took 1.1862096s
	I0229 18:03:00.725607    4160 ubuntu.go:193] setting minikube options for container-runtime
	I0229 18:03:00.726271    4160 config.go:182] Loaded profile config "ingress-addon-legacy-633500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:03:00.735999    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:00.888596    4160 main.go:141] libmachine: Using SSH client type: native
	I0229 18:03:00.889436    4160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57688 <nil> <nil>}
	I0229 18:03:00.889436    4160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:03:01.067460    4160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 18:03:01.067460    4160 ubuntu.go:71] root file system type: overlay
	I0229 18:03:01.067460    4160 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:03:01.078605    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:01.263236    4160 main.go:141] libmachine: Using SSH client type: native
	I0229 18:03:01.263750    4160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57688 <nil> <nil>}
	I0229 18:03:01.263896    4160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:03:01.455483    4160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:03:01.465767    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:01.620240    4160 main.go:141] libmachine: Using SSH client type: native
	I0229 18:03:01.621732    4160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 57688 <nil> <nil>}
	I0229 18:03:01.621732    4160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:03:02.844937    4160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 18:03:01.442462882 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0229 18:03:02.844937    4160 machine.go:91] provisioned docker machine in 4.0541912s
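	The update command issued at 18:03:01 uses diff as a change detector: diff -u exits non-zero when the two unit files differ, so the || branch installs the new file and restarts docker only when something actually changed. The idiom in isolation (current.conf and new.conf are placeholder names):
	
		# install-if-changed: diff exits 1 on differences, so the block after ||
		# runs only when new.conf really differs from current.conf
		sudo diff -u current.conf new.conf || {
			sudo mv new.conf current.conf
			sudo systemctl daemon-reload && sudo systemctl restart docker
		}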
	I0229 18:03:02.844937    4160 client.go:171] LocalClient.Create took 39.805203s
	I0229 18:03:02.844937    4160 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-633500" took 39.805203s
	I0229 18:03:02.844937    4160 start.go:300] post-start starting for "ingress-addon-legacy-633500" (driver="docker")
	I0229 18:03:02.844937    4160 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:03:02.858442    4160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:03:02.865992    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:03.022696    4160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:03:03.168568    4160 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:03:03.181004    4160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 18:03:03.181124    4160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 18:03:03.181124    4160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 18:03:03.181124    4160 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 18:03:03.181124    4160 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 18:03:03.181393    4160 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 18:03:03.182199    4160 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 18:03:03.182199    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> /etc/ssl/certs/56602.pem
	I0229 18:03:03.193318    4160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:03:03.211872    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 18:03:03.252017    4160 start.go:303] post-start completed in 407.077ms
	I0229 18:03:03.263712    4160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-633500
	I0229 18:03:03.428127    4160 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\config.json ...
	I0229 18:03:03.440780    4160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:03:03.448894    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:03.598981    4160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:03:03.722233    4160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 18:03:03.736876    4160 start.go:128] duration metric: createHost completed in 40.8806292s
	I0229 18:03:03.736876    4160 start.go:83] releasing machines lock for "ingress-addon-legacy-633500", held for 40.8806292s
	I0229 18:03:03.746055    4160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-633500
	I0229 18:03:03.910146    4160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:03:03.920874    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:03.921614    4160 ssh_runner.go:195] Run: cat /version.json
	I0229 18:03:03.928644    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:04.095587    4160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:03:04.108546    4160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:03:04.372734    4160 ssh_runner.go:195] Run: systemctl --version
	I0229 18:03:04.398343    4160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:03:04.423464    4160 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0229 18:03:04.442893    4160 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0229 18:03:04.453556    4160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:03:04.495131    4160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:03:04.525590    4160 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:03:04.525590    4160 start.go:475] detecting cgroup driver to use...
	I0229 18:03:04.525590    4160 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:03:04.525590    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:03:04.565456    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 18:03:04.597898    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:03:04.618605    4160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:03:04.629515    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:03:04.665175    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:03:04.696012    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:03:04.726830    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:03:04.761648    4160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:03:04.797605    4160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:03:04.825472    4160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:03:04.858937    4160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:03:04.888068    4160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:03:05.025022    4160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:03:05.187399    4160 start.go:475] detecting cgroup driver to use...
	I0229 18:03:05.187525    4160 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:03:05.202548    4160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:03:05.228170    4160 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 18:03:05.240049    4160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:03:05.264240    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:03:05.307456    4160 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:03:05.327450    4160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:03:05.349614    4160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:03:05.399166    4160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:03:05.571284    4160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:03:05.702868    4160 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:03:05.703412    4160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:03:05.750571    4160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:03:05.908000    4160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:03:06.452591    4160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:03:06.511541    4160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:03:06.561574    4160 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0229 18:03:06.570904    4160 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-633500 dig +short host.docker.internal
	I0229 18:03:06.831681    4160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 18:03:06.843320    4160 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 18:03:06.857241    4160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
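	The hosts rewrite above builds the replacement file as the unprivileged SSH user and uses sudo only for the final copy, since a plain > redirection is performed by the calling shell and could not write /etc/hosts directly. Restated with the moving parts labeled:
	
		# grep -v drops any stale host.minikube.internal entry, echo appends the
		# fresh mapping; $$ makes the temp file name unique per process
		{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
		  echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts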
	I0229 18:03:06.885511    4160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:03:07.037660    4160 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:03:07.047797    4160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:03:07.088585    4160 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 18:03:07.088585    4160 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 18:03:07.102496    4160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:03:07.140729    4160 ssh_runner.go:195] Run: which lz4
	I0229 18:03:07.151703    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 18:03:07.162589    4160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:03:07.176318    4160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:03:07.176318    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0229 18:03:23.585245    4160 docker.go:649] Took 16.432715 seconds to copy over tarball
	I0229 18:03:23.596325    4160 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:03:27.059292    4160 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.4629413s)
	I0229 18:03:27.059347    4160 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:03:27.162089    4160 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:03:27.182605    4160 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0229 18:03:27.225092    4160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:03:27.377718    4160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:03:39.077532    4160 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.6997269s)
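	The sequence from 18:03:07 to 18:03:39 injects the preloaded images straight into the node's image store rather than pulling them: the tarball is copied in, unpacked under /var (which contains /var/lib/docker), the name-to-layer index repositories.json is restored, and dockerd is restarted so it re-reads the store. Condensed to the commands run inside the node:
	
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		sudo rm /preloaded.tar.lz4
		# repositories.json maps image names/tags onto the injected layer IDs
		sudo systemctl daemon-reload && sudo systemctl restart docker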
	I0229 18:03:39.085982    4160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:03:39.129664    4160 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 18:03:39.129835    4160 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 18:03:39.129835    4160 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:03:39.143994    4160 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:03:39.152990    4160 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:03:39.158768    4160 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:03:39.159386    4160 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:03:39.159386    4160 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 18:03:39.165334    4160 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:03:39.166301    4160 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:03:39.167022    4160 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 18:03:39.167022    4160 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 18:03:39.172547    4160 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:03:39.174781    4160 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:03:39.182543    4160 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:03:39.184006    4160 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 18:03:39.184737    4160 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 18:03:39.186509    4160 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 18:03:39.187622    4160 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W0229 18:03:39.279558    4160 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:03:39.356612    4160 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:03:39.449170    4160 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:03:39.526578    4160 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:03:39.618382    4160 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:03:39.696951    4160 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:03:39.738359    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:03:39.748222    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0229 18:03:39.774055    4160 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:03:39.794436    4160 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 18:03:39.794436    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0229 18:03:39.795032    4160 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:03:39.803743    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:03:39.821740    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:03:39.822732    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:03:39.845743    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0229 18:03:39.859740    4160 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 18:03:39.860752    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0229 18:03:39.860752    4160 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:03:39.864739    4160 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 18:03:39.864739    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0229 18:03:39.864739    4160 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	W0229 18:03:39.868730    4160 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:03:39.870730    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:03:39.871730    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 18:03:39.873731    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:03:39.910078    4160 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 18:03:39.910078    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0229 18:03:39.910078    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0229 18:03:39.910078    4160 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 18:03:39.917478    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0229 18:03:39.918508    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0229 18:03:39.951802    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0229 18:03:39.967804    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 18:03:39.968803    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 18:03:40.003693    4160 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 18:03:40.003742    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.7 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0229 18:03:40.003742    4160 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 18:03:40.007873    4160 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 18:03:40.007873    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0229 18:03:40.007873    4160 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0229 18:03:40.014686    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0229 18:03:40.017936    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0229 18:03:40.052475    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0229 18:03:40.055681    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0229 18:03:40.141950    4160 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:03:40.180795    4160 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 18:03:40.180795    4160 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.18.20 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0229 18:03:40.181332    4160 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:03:40.189983    4160 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:03:40.228032    4160 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0229 18:03:40.228191    4160 cache_images.go:92] LoadImages completed in 1.0983477s
	W0229 18:03:40.228191    4160 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20: The system cannot find the path specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20: The system cannot find the path specified.
	I0229 18:03:40.237660    4160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:03:40.330873    4160 cni.go:84] Creating CNI manager for ""
	I0229 18:03:40.331007    4160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:03:40.331007    4160 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:03:40.331007    4160 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-633500 NodeName:ingress-addon-legacy-633500 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:03:40.331007    4160 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-633500"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
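	This rendered config is what the scp step below writes to /var/tmp/minikube/kubeadm.yaml and what 'kubeadm init' later consumes via --config. A sketch for exercising just the preflight phase against the same file from inside the node, assuming the paths shown in this log (not part of the original run):

	    minikube -p ingress-addon-legacy-633500 ssh -- \
	      sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml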
	
	I0229 18:03:40.331647    4160 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-633500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-633500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
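	The kubelet drop-in and unit above land on the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service (the 354- and 353-byte scp lines below). A quick way to view the merged unit systemd actually runs, assuming the same profile (not part of the original run):

	    minikube -p ingress-addon-legacy-633500 ssh -- sudo systemctl cat kubelet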
	I0229 18:03:40.343162    4160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 18:03:40.362400    4160 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:03:40.373394    4160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:03:40.393886    4160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0229 18:03:40.423919    4160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 18:03:40.452830    4160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0229 18:03:40.491968    4160 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 18:03:40.504052    4160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
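	The one-liner above is the usual safe /etc/hosts rewrite: strip any stale control-plane.minikube.internal entry, append the fresh one, write the result to a temp file, then sudo cp it back (a plain '> /etc/hosts' redirect would fail, since the redirection runs before sudo takes effect). Spelled out, the same idiom looks like:

	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo $'192.168.49.2\tcontrol-plane.minikube.internal'
	    } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts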
	I0229 18:03:40.522580    4160 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500 for IP: 192.168.49.2
	I0229 18:03:40.522694    4160 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:40.523248    4160 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 18:03:40.523653    4160 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:03:40.524331    4160 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\client.key
	I0229 18:03:40.524494    4160 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\client.crt with IP's: []
	I0229 18:03:40.639097    4160 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\client.crt ...
	I0229 18:03:40.639097    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\client.crt: {Name:mk77394087850b73af7db8cfa004937ad37d7da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:40.641100    4160 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\client.key ...
	I0229 18:03:40.641100    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\client.key: {Name:mk989aa0b678e9f85c8ceb8dd3eab6ce16ea5a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:40.642717    4160 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key.dd3b5fb2
	I0229 18:03:40.642717    4160 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:03:40.854246    4160 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt.dd3b5fb2 ...
	I0229 18:03:40.854246    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt.dd3b5fb2: {Name:mk74d5faf7d43acb0fe14a9d159df4523ebd83b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:40.856160    4160 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key.dd3b5fb2 ...
	I0229 18:03:40.856160    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key.dd3b5fb2: {Name:mk2b2a7ff1cad87b27d341f07ada6c76a1118394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:40.856454    4160 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt
	I0229 18:03:40.866551    4160 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key
	I0229 18:03:40.867876    4160 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.key
	I0229 18:03:40.867876    4160 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.crt with IP's: []
	I0229 18:03:41.081542    4160 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.crt ...
	I0229 18:03:41.081542    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.crt: {Name:mka901ebcda12c6769d3d928689dce1d57d4e8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:41.082382    4160 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.key ...
	I0229 18:03:41.082382    4160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.key: {Name:mk08b1c025aa6bd3dee6d099dc210c2296518c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:03:41.083459    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 18:03:41.084472    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 18:03:41.084472    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 18:03:41.091908    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 18:03:41.092957    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:03:41.093125    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:03:41.093339    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:03:41.093494    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:03:41.093637    4160 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 18:03:41.093637    4160 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 18:03:41.094297    4160 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 18:03:41.094471    4160 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 18:03:41.094731    4160 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:03:41.094969    4160 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:03:41.095189    4160 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 18:03:41.095189    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> /usr/share/ca-certificates/56602.pem
	I0229 18:03:41.095840    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:03:41.095970    4160 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem -> /usr/share/ca-certificates/5660.pem
	I0229 18:03:41.096114    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:03:41.145764    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:03:41.185357    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:03:41.227084    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-633500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:03:41.267000    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:03:41.306225    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:03:41.345956    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:03:41.384902    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:03:41.420668    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 18:03:41.458784    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:03:41.496041    4160 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 18:03:41.531722    4160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:03:41.580406    4160 ssh_runner.go:195] Run: openssl version
	I0229 18:03:41.606496    4160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 18:03:41.638333    4160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 18:03:41.651025    4160 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 18:03:41.662870    4160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 18:03:41.690906    4160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:03:41.721287    4160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:03:41.751720    4160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:03:41.763487    4160 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:03:41.773536    4160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:03:41.800196    4160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:03:41.831267    4160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 18:03:41.861892    4160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 18:03:41.874508    4160 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 18:03:41.884467    4160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 18:03:41.911598    4160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
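	Each 'openssl x509 -hash -noout' call above prints the certificate's subject-name hash, which OpenSSL expects as the <hash>.0 symlink name under /etc/ssl/certs; that is where the 3ec20f2e.0, b5213941.0 and 51391683.0 link names come from. For example:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above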
	I0229 18:03:41.941277    4160 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:03:41.953164    4160 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:03:41.953164    4160 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-633500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-633500 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:03:41.962531    4160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:03:42.011802    4160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:03:42.043672    4160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:03:42.063899    4160 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 18:03:42.073904    4160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:03:42.094180    4160 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:03:42.094180    4160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 18:03:42.201617    4160 kubeadm.go:322] W0229 18:03:42.199444    1901 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 18:03:42.415997    4160 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:03:42.416297    4160 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 18:03:42.507455    4160 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0229 18:03:42.644872    4160 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:03:47.022274    4160 kubeadm.go:322] W0229 18:03:47.020341    1901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:03:47.024265    4160 kubeadm.go:322] W0229 18:03:47.022394    1901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:07:47.032079    4160 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:07:47.032416    4160 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:07:47.045057    4160 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 18:07:47.045196    4160 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:07:47.045196    4160 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:07:47.045196    4160 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:07:47.045912    4160 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:07:47.046237    4160 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:07:47.046567    4160 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:07:47.046759    4160 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:07:47.046918    4160 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:07:47.050330    4160 out.go:204]   - Generating certificates and keys ...
	I0229 18:07:47.050330    4160 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:07:47.050330    4160 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:07:47.050911    4160 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:07:47.051068    4160 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:07:47.051170    4160 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:07:47.051250    4160 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:07:47.051487    4160 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:07:47.051955    4160 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-633500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 18:07:47.052137    4160 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:07:47.052391    4160 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-633500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 18:07:47.052391    4160 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:07:47.052391    4160 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:07:47.052391    4160 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:07:47.052972    4160 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:07:47.053154    4160 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:07:47.053335    4160 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:07:47.053335    4160 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:07:47.053335    4160 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:07:47.053335    4160 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:07:47.058055    4160 out.go:204]   - Booting up control plane ...
	I0229 18:07:47.058238    4160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:07:47.058399    4160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:07:47.058532    4160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:07:47.058856    4160 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:07:47.058966    4160 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:07:47.058966    4160 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:07:47.058966    4160 kubeadm.go:322] 
	I0229 18:07:47.058966    4160 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 18:07:47.059507    4160 kubeadm.go:322] 		timed out waiting for the condition
	I0229 18:07:47.059591    4160 kubeadm.go:322] 
	I0229 18:07:47.059782    4160 kubeadm.go:322] 	This error is likely caused by:
	I0229 18:07:47.059890    4160 kubeadm.go:322] 		- The kubelet is not running
	I0229 18:07:47.060422    4160 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:07:47.060422    4160 kubeadm.go:322] 
	I0229 18:07:47.060422    4160 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:07:47.060422    4160 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 18:07:47.060422    4160 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 18:07:47.060422    4160 kubeadm.go:322] 
	I0229 18:07:47.061012    4160 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:07:47.061326    4160 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 18:07:47.061369    4160 kubeadm.go:322] 
	I0229 18:07:47.061513    4160 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:07:47.061556    4160 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:07:47.061655    4160 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 18:07:47.061655    4160 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 18:07:47.061655    4160 kubeadm.go:322] 
	W0229 18:07:47.061655    4160 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-633500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-633500 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:03:42.199444    1901 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:03:47.020341    1901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:03:47.022394    1901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:07:47.062363    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:07:48.603238    4160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.5408636s)
	I0229 18:07:48.614117    4160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:07:48.635936    4160 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 18:07:48.646791    4160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:07:48.666000    4160 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:07:48.666118    4160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 18:07:48.753995    4160 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 18:07:48.754163    4160 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:07:49.204060    4160 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:07:49.204161    4160 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:07:49.204161    4160 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:07:49.512889    4160 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:07:49.514534    4160 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:07:49.514534    4160 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:07:49.668726    4160 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:07:49.672563    4160 out.go:204]   - Generating certificates and keys ...
	I0229 18:07:49.672821    4160 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:07:49.673103    4160 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:07:49.673255    4160 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:07:49.673399    4160 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:07:49.673756    4160 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:07:49.674001    4160 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:07:49.674237    4160 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:07:49.675920    4160 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:07:49.676136    4160 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:07:49.676776    4160 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:07:49.676776    4160 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:07:49.676776    4160 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:07:50.171447    4160 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:07:50.387370    4160 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:07:50.849328    4160 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:07:50.949923    4160 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:07:50.950669    4160 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:07:50.953783    4160 out.go:204]   - Booting up control plane ...
	I0229 18:07:50.953783    4160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:07:50.964777    4160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:07:50.966697    4160 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:07:50.969071    4160 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:07:50.975478    4160 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:08:30.977542    4160 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:11:50.980451    4160 kubeadm.go:322] 
	I0229 18:11:50.980606    4160 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 18:11:50.980744    4160 kubeadm.go:322] 		timed out waiting for the condition
	I0229 18:11:50.980744    4160 kubeadm.go:322] 
	I0229 18:11:50.980744    4160 kubeadm.go:322] 	This error is likely caused by:
	I0229 18:11:50.980744    4160 kubeadm.go:322] 		- The kubelet is not running
	I0229 18:11:50.980744    4160 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:11:50.981353    4160 kubeadm.go:322] 
	I0229 18:11:50.981608    4160 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:11:50.981702    4160 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 18:11:50.981895    4160 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 18:11:50.981932    4160 kubeadm.go:322] 
	I0229 18:11:50.982232    4160 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:11:50.982468    4160 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 18:11:50.982637    4160 kubeadm.go:322] 
	I0229 18:11:50.982829    4160 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:11:50.982988    4160 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:11:50.983705    4160 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 18:11:50.983909    4160 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 18:11:50.984036    4160 kubeadm.go:322] 
	I0229 18:11:50.989718    4160 kubeadm.go:322] W0229 18:07:48.752187    5742 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 18:11:50.990016    4160 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:11:50.990538    4160 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 18:11:50.990714    4160 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0229 18:11:50.990714    4160 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:11:50.991254    4160 kubeadm.go:322] W0229 18:07:50.963294    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:11:50.991450    4160 kubeadm.go:322] W0229 18:07:50.965067    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:11:50.991450    4160 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:11:50.991450    4160 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:11:50.991450    4160 kubeadm.go:406] StartCluster complete in 8m9.0346356s
	I0229 18:11:51.001701    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:11:51.055103    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.055103    4160 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:11:51.064462    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:11:51.104858    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.104962    4160 logs.go:278] No container was found matching "etcd"
	I0229 18:11:51.115971    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:11:51.154855    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.154855    4160 logs.go:278] No container was found matching "coredns"
	I0229 18:11:51.164098    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:11:51.200724    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.200724    4160 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:11:51.211094    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:11:51.244923    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.244923    4160 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:11:51.253486    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:11:51.292237    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.292237    4160 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:11:51.300721    4160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:11:51.335415    4160 logs.go:276] 0 containers: []
	W0229 18:11:51.335510    4160 logs.go:278] No container was found matching "kindnet"
	I0229 18:11:51.335510    4160 logs.go:123] Gathering logs for kubelet ...
	I0229 18:11:51.335648    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:11:51.377127    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:21 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:21.508898    5971 pod_workers.go:191] Error syncing pod 003b0f8c06c4e64f37c803d613312348 ("etcd-ingress-addon-legacy-633500_kube-system(003b0f8c06c4e64f37c803d613312348)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0229 18:11:51.383031    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:25 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:25.515746    5971 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-633500_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0229 18:11:51.387115    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:28 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:28.513011    5971 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-633500_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0229 18:11:51.388403    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:30 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:30.512472    5971 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-633500_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0229 18:11:51.393553    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:35 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:35.512625    5971 pod_workers.go:191] Error syncing pod 003b0f8c06c4e64f37c803d613312348 ("etcd-ingress-addon-legacy-633500_kube-system(003b0f8c06c4e64f37c803d613312348)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	W0229 18:11:51.398825    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:40 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:40.513656    5971 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-633500_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	W0229 18:11:51.399817    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:41 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:41.510826    5971 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-633500_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
	W0229 18:11:51.403350    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:44 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:44.509590    5971 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-633500_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	W0229 18:11:51.407515    4160 logs.go:138] Found kubelet problem: Feb 29 18:11:48 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:48.511023    5971 pod_workers.go:191] Error syncing pod 003b0f8c06c4e64f37c803d613312348 ("etcd-ingress-addon-legacy-633500_kube-system(003b0f8c06c4e64f37c803d613312348)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
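	Every static pod here fails the same way: ImageInspectError with "Id or size of image ... is not set". Taken together with the preflight warning that Docker 25.0.3 is far past the last validated version (19.03), this looks like the v1.18 dockershim failing to parse inspect responses from a much newer Docker Engine rather than missing images. A sketch to check that reading, assuming the profile from this log (not part of the original run):

	    # If the image is present and reports an Id and Size, the problem is the old kubelet's parsing, not the pull:
	    minikube -p ingress-addon-legacy-633500 ssh -- docker inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/etcd:3.4.3-0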
	I0229 18:11:51.410193    4160 logs.go:123] Gathering logs for dmesg ...
	I0229 18:11:51.410193    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:11:51.437752    4160 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:11:51.437752    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:11:51.547196    4160 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:11:51.547253    4160 logs.go:123] Gathering logs for Docker ...
	I0229 18:11:51.547253    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:11:51.583048    4160 logs.go:123] Gathering logs for container status ...
	I0229 18:11:51.583048    4160 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
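	The container-status probe above uses a fallback idiom: 'which crictl || echo crictl' substitutes the bare word crictl when the binary is absent (so the first command fails cleanly), and the trailing '|| sudo docker ps -a' then takes over on Docker-only nodes like this one. The same logic, spelled out:

	    if command -v crictl >/dev/null 2>&1; then sudo crictl ps -a; else sudo docker ps -a; fi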
	W0229 18:11:51.664404    4160 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:07:48.752187    5742 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:07:50.963294    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:07:50.965067    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:11:51.664943    4160 out.go:239] * 
	W0229 18:11:51.665179    4160 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:07:48.752187    5742 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:07:50.963294    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:07:50.965067    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:11:51.665375    4160 out.go:239] * 
	W0229 18:11:51.666956    4160 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:11:51.671095    4160 out.go:177] X Problems detected in kubelet:
	I0229 18:11:51.676373    4160 out.go:177]   Feb 29 18:11:21 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:21.508898    5971 pod_workers.go:191] Error syncing pod 003b0f8c06c4e64f37c803d613312348 ("etcd-ingress-addon-legacy-633500_kube-system(003b0f8c06c4e64f37c803d613312348)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
	I0229 18:11:51.680976    4160 out.go:177]   Feb 29 18:11:25 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:25.515746    5971 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-633500_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
	I0229 18:11:51.685122    4160 out.go:177]   Feb 29 18:11:28 ingress-addon-legacy-633500 kubelet[5971]: E0229 18:11:28.513011    5971 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-633500_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
	I0229 18:11:51.689994    4160 out.go:177] 
	W0229 18:11:51.691641    4160 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:07:48.752187    5742 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:07:50.963294    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:07:50.965067    5742 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:11:51.692629    4160 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:11:51.692629    4160 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:11:51.695653    4160 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-633500 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (575.95s)
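The ImageInspectError lines above are the root of this failure: the legacy v1.18.20 kubelet refuses to start each control-plane container because the Docker image inspect result comes back without an Id or size, so the apiserver never listens and every later step (describe nodes, the kubectl applies) dies with "The connection to the server localhost:8443 was refused". This is consistent with the preflight warning in the same stderr that Docker 25.0.3 is not a validated version for this Kubernetes release (latest validated: 19.03). A minimal sketch of the failing check, written against the standard Docker Go client (github.com/docker/docker/client); it mirrors the kubelet message above and is not kubelet's or minikube's actual code:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// One of the images named in the kubelet errors above.
		ref := "k8s.gcr.io/kube-apiserver:v1.18.20"

		inspect, _, err := cli.ImageInspectWithRaw(context.Background(), ref)
		if err != nil {
			log.Fatalf("Failed to inspect image %q: %v", ref, err)
		}

		// The legacy kubelet treats an empty Id or a zero size as a broken
		// image, which is exactly the "Id or size of image ... is not set"
		// error repeated throughout this log.
		if inspect.ID == "" || inspect.Size == 0 {
			log.Fatalf("Id or size of image %q is not set", ref)
		}
		fmt.Printf("%s: Id=%s Size=%d\n", ref, inspect.ID, inspect.Size)
	}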

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.43s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-633500 addons enable ingress --alsologtostderr -v=5
E0229 18:11:58.495694    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-633500 addons enable ingress --alsologtostderr -v=5: exit status 1 (24.0659441s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:11:52.239712   10628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:11:52.337129   10628 out.go:291] Setting OutFile to fd 1688 ...
	I0229 18:11:52.352477   10628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:11:52.352477   10628 out.go:304] Setting ErrFile to fd 1576...
	I0229 18:11:52.352477   10628 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:11:52.367510   10628 mustload.go:65] Loading cluster: ingress-addon-legacy-633500
	I0229 18:11:52.368205   10628 config.go:182] Loaded profile config "ingress-addon-legacy-633500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:11:52.368310   10628 addons.go:597] checking whether the cluster is paused
	I0229 18:11:52.368450   10628 config.go:182] Loaded profile config "ingress-addon-legacy-633500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:11:52.368450   10628 host.go:66] Checking if "ingress-addon-legacy-633500" exists ...
	I0229 18:11:52.383736   10628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-633500 --format={{.State.Status}}
	I0229 18:11:52.551301   10628 ssh_runner.go:195] Run: systemctl --version
	I0229 18:11:52.557952   10628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:11:52.723959   10628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:11:52.858517   10628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:11:52.901694   10628 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 18:11:52.905044   10628 config.go:182] Loaded profile config "ingress-addon-legacy-633500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:11:52.905044   10628 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-633500"
	I0229 18:11:52.905044   10628 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-633500"
	I0229 18:11:52.905044   10628 host.go:66] Checking if "ingress-addon-legacy-633500" exists ...
	I0229 18:11:52.924204   10628 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-633500 --format={{.State.Status}}
	I0229 18:11:53.084394   10628 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0229 18:11:53.087190   10628 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 18:11:53.090843   10628 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 18:11:53.093004   10628 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 18:11:53.096695   10628 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 18:11:53.096695   10628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 18:11:53.103472   10628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	I0229 18:11:53.267040   10628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57688 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\ingress-addon-legacy-633500\id_rsa Username:docker}
	I0229 18:11:53.419758   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:53.529758   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:53.530754   10628 retry.go:31] will retry after 342.045381ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:53.888037   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:54.005152   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:54.005221   10628 retry.go:31] will retry after 464.175966ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:54.491274   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:54.595798   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:54.595798   10628 retry.go:31] will retry after 402.230845ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:55.016034   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:55.127494   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:55.127494   10628 retry.go:31] will retry after 731.079091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:55.875941   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:55.982188   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:55.982188   10628 retry.go:31] will retry after 848.455642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:56.847334   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:56.943727   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:56.943727   10628 retry.go:31] will retry after 2.420782211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:59.389824   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:11:59.505487   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:11:59.505487   10628 retry.go:31] will retry after 2.061762535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:12:01.586767   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:12:01.683497   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:12:01.683497   10628 retry.go:31] will retry after 5.435452736s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:12:07.139723   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:12:07.236551   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:12:07.236551   10628 retry.go:31] will retry after 4.85585957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:12:12.104477   10628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:12:12.200168   10628 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:12:12.200737   10628 retry.go:31] will retry after 7.143232154s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 1
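Before giving up, the addons code retries the same kubectl apply with growing, jittered delays (342ms, 464ms, 402ms, 731ms, 848ms, 2.42s, 2.06s, 5.43s, 4.86s, 7.14s in the retry.go:31 lines above); every attempt fails identically because nothing is listening on localhost:8443. An illustrative Go sketch of that retry shape; applyManifest, the base delay, and the cap are hypothetical stand-ins, not minikube's retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// applyManifest stands in for the repeated
	// "kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml";
	// here it always fails, as it does above while the apiserver is down.
	func applyManifest() error {
		return fmt.Errorf("connection to the server localhost:8443 was refused")
	}

	func main() {
		delay := 300 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			err := applyManifest()
			if err == nil {
				fmt.Println("applied")
				return
			}
			// Add jitter and roughly double the delay each round, matching
			// the intervals printed by retry.go:31 in the log above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, wait, err)
			time.Sleep(wait)
			delay *= 2
			if delay > 4*time.Second {
				delay = 4 * time.Second // plateau, as the later log intervals do
			}
		}
		fmt.Println("giving up")
	}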
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-633500
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-633500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dffde971aa5526be666f2949d73c9029411164f2f30674115835f2c3fbe4650",
	        "Created": "2024-02-29T18:02:54.102089924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45315,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:02:54.757842494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/7dffde971aa5526be666f2949d73c9029411164f2f30674115835f2c3fbe4650/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dffde971aa5526be666f2949d73c9029411164f2f30674115835f2c3fbe4650/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dffde971aa5526be666f2949d73c9029411164f2f30674115835f2c3fbe4650/hosts",
	        "LogPath": "/var/lib/docker/containers/7dffde971aa5526be666f2949d73c9029411164f2f30674115835f2c3fbe4650/7dffde971aa5526be666f2949d73c9029411164f2f30674115835f2c3fbe4650-json.log",
	        "Name": "/ingress-addon-legacy-633500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-633500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-633500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c6a8b5cc1b0b2ba3e93ea3da98b28b29a89cd106cd6026fbbf8fa72670331404-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c6a8b5cc1b0b2ba3e93ea3da98b28b29a89cd106cd6026fbbf8fa72670331404/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c6a8b5cc1b0b2ba3e93ea3da98b28b29a89cd106cd6026fbbf8fa72670331404/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c6a8b5cc1b0b2ba3e93ea3da98b28b29a89cd106cd6026fbbf8fa72670331404/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-633500",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-633500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-633500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-633500",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-633500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8eaded3494c71291d88d1bbc0581057b893f736cf8a5a3520074b51f24e45fde",
	            "SandboxKey": "/var/run/docker/netns/8eaded3494c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57688"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57689"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57690"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57691"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57692"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-633500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7dffde971aa5",
	                        "ingress-addon-legacy-633500"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "c49a182449984b92cea2ba1f186ff611791d5810e858d36982dda3f0729be29c",
	                    "EndpointID": "924a4d616db0aaffebd7c0e580ccbdf0ba92303e72224bd245ac37a2dd449af8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-633500",
	                        "7dffde971aa5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
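The container itself is healthy at the Docker level; the failure is entirely inside it. During the run minikube does not parse this whole document, it extracts single fields with Go templates (see the cli_runner lines in the stderr above). Both templates can be replayed verbatim against the live container, and their outputs correspond to the State.Status and Ports values in the JSON above:

	docker container inspect ingress-addon-legacy-633500 --format={{.State.Status}}
	# running
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-633500
	# '57688'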
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-633500 -n ingress-addon-legacy-633500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-633500 -n ingress-addon-legacy-633500: exit status 6 (1.1833176s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:12:16.468831    5728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 18:12:17.468669    5728 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-633500" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-633500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.43s)
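Two host-side symptoms in this status check are separate from the cluster failure itself. First, the profile is missing from the kubeconfig (status.go:415 above), which is what the "stale minikube-vm" warning and its suggested minikube update-context refer to. Second, the recurring Docker CLI warning fires because the metadata file for context "default" does not exist under .docker\contexts\meta (the directory name is a digest of the context name). Assuming a working profile, both can be checked from the same shell the tests run in:

	out/minikube-windows-amd64.exe -p ingress-addon-legacy-633500 update-context
	docker context ls
	docker context use default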

                                                
                                    
TestKubernetesUpgrade (717.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (10m2.8097087s)

-- stdout --
	* [kubernetes-upgrade-996700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-996700 in cluster kubernetes-upgrade-996700
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 18:57:14 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:14.727836    6055 pod_workers.go:191] Error syncing pod ebfcedfcf37aa29aa0d98d550dfcab27 ("etcd-kubernetes-upgrade-996700_kube-system(ebfcedfcf37aa29aa0d98d550dfcab27)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 29 18:57:15 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:15.735914    6055 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-kubernetes-upgrade-996700_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 29 18:57:15 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:15.740181    6055 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-996700_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	
	

-- /stdout --
** stderr ** 
	W0229 18:47:32.439305    8296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:47:32.546478    8296 out.go:291] Setting OutFile to fd 1720 ...
	I0229 18:47:32.547654    8296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:32.547694    8296 out.go:304] Setting ErrFile to fd 648...
	I0229 18:47:32.547694    8296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:32.578246    8296 out.go:298] Setting JSON to false
	I0229 18:47:32.584766    8296 start.go:129] hostinfo: {"hostname":"minikube7","uptime":10412,"bootTime":1709222039,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 18:47:32.585134    8296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:47:32.589768    8296 out.go:177] * [kubernetes-upgrade-996700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:47:32.597034    8296 notify.go:220] Checking for updates...
	I0229 18:47:32.599866    8296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 18:47:32.607729    8296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:47:32.611977    8296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 18:47:32.619658    8296 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:47:32.625214    8296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:47:32.631689    8296 config.go:182] Loaded profile config "missing-upgrade-251000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 18:47:32.631799    8296 config.go:182] Loaded profile config "pause-465700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:47:32.632413    8296 config.go:182] Loaded profile config "running-upgrade-130400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 18:47:32.632413    8296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:47:33.012618    8296 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 18:47:33.028714    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:47:33.530782    8296 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:88 SystemTime:2024-02-29 18:47:33.480114909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:47:33.537821    8296 out.go:177] * Using the docker driver based on user configuration
	I0229 18:47:33.546531    8296 start.go:299] selected driver: docker
	I0229 18:47:33.546531    8296 start.go:903] validating driver "docker" against <nil>
	I0229 18:47:33.546531    8296 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:47:33.667945    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:47:34.193275    8296 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:90 SystemTime:2024-02-29 18:47:34.149509562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:47:34.193932    8296 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:47:34.196476    8296 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 18:47:34.199306    8296 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 18:47:34.202978    8296 cni.go:84] Creating CNI manager for ""
	I0229 18:47:34.202978    8296 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:47:34.202978    8296 start_flags.go:323] config:
	{Name:kubernetes-upgrade-996700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:34.210863    8296 out.go:177] * Starting control plane node kubernetes-upgrade-996700 in cluster kubernetes-upgrade-996700
	I0229 18:47:34.215348    8296 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 18:47:34.222832    8296 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 18:47:34.226068    8296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:47:34.226068    8296 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 18:47:34.226068    8296 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 18:47:34.226068    8296 cache.go:56] Caching tarball of preloaded images
	I0229 18:47:34.226731    8296 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:47:34.226731    8296 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 18:47:34.226731    8296 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\config.json ...
	I0229 18:47:34.227402    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\config.json: {Name:mkd16bc1aad528d6283e7d97f4b4baf3728b7756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:34.476928    8296 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 18:47:34.476928    8296 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 18:47:34.477495    8296 cache.go:194] Successfully downloaded all kic artifacts
	I0229 18:47:34.477657    8296 start.go:365] acquiring machines lock for kubernetes-upgrade-996700: {Name:mk89dcbbe5e4802e74ccae3022ddca2e6fcb62d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:34.477657    8296 start.go:369] acquired machines lock for "kubernetes-upgrade-996700" in 0s
	I0229 18:47:34.478205    8296 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-996700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:47:34.478416    8296 start.go:125] createHost starting for "" (driver="docker")
	I0229 18:47:34.488929    8296 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 18:47:34.489363    8296 start.go:159] libmachine.API.Create for "kubernetes-upgrade-996700" (driver="docker")
	I0229 18:47:34.489363    8296 client.go:168] LocalClient.Create starting
	I0229 18:47:34.489993    8296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0229 18:47:34.490059    8296 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:34.490059    8296 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:34.490059    8296 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0229 18:47:34.490059    8296 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:34.490604    8296 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:34.498199    8296 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-996700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 18:47:34.683902    8296 cli_runner.go:211] docker network inspect kubernetes-upgrade-996700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 18:47:34.697258    8296 network_create.go:281] running [docker network inspect kubernetes-upgrade-996700] to gather additional debugging logs...
	I0229 18:47:34.697258    8296 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-996700
	W0229 18:47:34.920572    8296 cli_runner.go:211] docker network inspect kubernetes-upgrade-996700 returned with exit code 1
	I0229 18:47:34.920572    8296 network_create.go:284] error running [docker network inspect kubernetes-upgrade-996700]: docker network inspect kubernetes-upgrade-996700: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-996700 not found
	I0229 18:47:34.920572    8296 network_create.go:286] output of [docker network inspect kubernetes-upgrade-996700]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-996700 not found
	
	** /stderr **
	I0229 18:47:34.935974    8296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 18:47:35.238135    8296 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:47:35.267031    8296 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a45a0}
	I0229 18:47:35.267031    8296 network_create.go:124] attempt to create docker network kubernetes-upgrade-996700 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0229 18:47:35.285914    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700
	W0229 18:47:35.557178    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700 returned with exit code 1
	W0229 18:47:35.557300    8296 network_create.go:149] failed to create docker network kubernetes-upgrade-996700 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 18:47:35.557356    8296 network_create.go:116] failed to create docker network kubernetes-upgrade-996700 192.168.58.0/24, will retry: subnet is taken
	I0229 18:47:35.607464    8296 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:47:35.633985    8296 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00234d2c0}
	I0229 18:47:35.633985    8296 network_create.go:124] attempt to create docker network kubernetes-upgrade-996700 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0229 18:47:35.654124    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700
	W0229 18:47:35.905216    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700 returned with exit code 1
	W0229 18:47:35.905216    8296 network_create.go:149] failed to create docker network kubernetes-upgrade-996700 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 18:47:35.905216    8296 network_create.go:116] failed to create docker network kubernetes-upgrade-996700 192.168.67.0/24, will retry: subnet is taken
	I0229 18:47:35.951672    8296 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:47:35.989316    8296 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024787e0}
	I0229 18:47:35.989316    8296 network_create.go:124] attempt to create docker network kubernetes-upgrade-996700 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0229 18:47:36.004075    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700
	W0229 18:47:36.246781    8296 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700 returned with exit code 1
	W0229 18:47:36.246781    8296 network_create.go:149] failed to create docker network kubernetes-upgrade-996700 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 18:47:36.246781    8296 network_create.go:116] failed to create docker network kubernetes-upgrade-996700 192.168.76.0/24, will retry: subnet is taken
	I0229 18:47:36.296871    8296 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:47:36.329660    8296 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a4c90}
	I0229 18:47:36.329851    8296 network_create.go:124] attempt to create docker network kubernetes-upgrade-996700 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0229 18:47:36.343210    8296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 kubernetes-upgrade-996700
	I0229 18:47:36.745768    8296 network_create.go:108] docker network kubernetes-upgrade-996700 192.168.85.0/24 created
	I0229 18:47:36.745768    8296 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-996700" container
	I0229 18:47:36.780522    8296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 18:47:37.034777    8296 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-996700 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --label created_by.minikube.sigs.k8s.io=true
	I0229 18:47:37.288049    8296 oci.go:103] Successfully created a docker volume kubernetes-upgrade-996700
	I0229 18:47:37.306051    8296 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-996700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --entrypoint /usr/bin/test -v kubernetes-upgrade-996700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 18:47:40.170327    8296 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-996700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --entrypoint /usr/bin/test -v kubernetes-upgrade-996700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.8638297s)
	I0229 18:47:40.170327    8296 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-996700
	I0229 18:47:40.170327    8296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:47:40.170327    8296 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 18:47:40.185814    8296 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-996700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 18:48:03.751610    8296 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-996700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (23.565607s)
	I0229 18:48:03.751739    8296 kic.go:203] duration metric: took 23.581224 seconds to extract preloaded images to volume
	I0229 18:48:03.764499    8296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:48:04.176005    8296 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:86 SystemTime:2024-02-29 18:48:04.124311451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:48:04.185978    8296 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 18:48:04.593727    8296 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-996700 --name kubernetes-upgrade-996700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --network kubernetes-upgrade-996700 --ip 192.168.85.2 --volume kubernetes-upgrade-996700:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 18:48:06.115056    8296 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-996700 --name kubernetes-upgrade-996700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-996700 --network kubernetes-upgrade-996700 --ip 192.168.85.2 --volume kubernetes-upgrade-996700:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08: (1.5203454s)
	I0229 18:48:06.127113    8296 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Running}}
	I0229 18:48:06.353671    8296 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:48:06.556404    8296 cli_runner.go:164] Run: docker exec kubernetes-upgrade-996700 stat /var/lib/dpkg/alternatives/iptables
	I0229 18:48:06.877933    8296 oci.go:144] the created container "kubernetes-upgrade-996700" has a running status.
	I0229 18:48:06.877933    8296 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa...
	I0229 18:48:07.105778    8296 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 18:48:07.361777    8296 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:48:07.592581    8296 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 18:48:07.592581    8296 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-996700 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 18:48:07.855609    8296 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa...
	I0229 18:48:10.648661    8296 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:48:10.837510    8296 machine.go:88] provisioning docker machine ...
	I0229 18:48:10.837510    8296 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-996700"
	I0229 18:48:10.848240    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:11.033643    8296 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:11.044579    8296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59396 <nil> <nil>}
	I0229 18:48:11.044579    8296 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-996700 && echo "kubernetes-upgrade-996700" | sudo tee /etc/hostname
	I0229 18:48:11.247380    8296 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-996700
	
	I0229 18:48:11.262236    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:11.463471    8296 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:11.464446    8296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59396 <nil> <nil>}
	I0229 18:48:11.464446    8296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-996700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-996700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-996700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:48:11.649828    8296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:48:11.649828    8296 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 18:48:11.649828    8296 ubuntu.go:177] setting up certificates
	I0229 18:48:11.649828    8296 provision.go:83] configureAuth start
	I0229 18:48:11.668040    8296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-996700
	I0229 18:48:11.859545    8296 provision.go:138] copyHostCerts
	I0229 18:48:11.860550    8296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:48:11.860550    8296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 18:48:11.860550    8296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:48:11.862529    8296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:48:11.862529    8296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 18:48:11.862529    8296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 18:48:11.864537    8296 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:48:11.864537    8296 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 18:48:11.865541    8296 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:48:11.867549    8296 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-996700 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-996700]
	I0229 18:48:12.124406    8296 provision.go:172] copyRemoteCerts
	I0229 18:48:12.142983    8296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:48:12.155877    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:12.364221    8296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59396 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:48:12.487034    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:48:12.538424    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:48:12.586709    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:48:12.635860    8296 provision.go:86] duration metric: configureAuth took 986.0235ms
	I0229 18:48:12.635860    8296 ubuntu.go:193] setting minikube options for container-runtime
	I0229 18:48:12.636831    8296 config.go:182] Loaded profile config "kubernetes-upgrade-996700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:48:12.647598    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:12.846307    8296 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:12.847068    8296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59396 <nil> <nil>}
	I0229 18:48:12.847134    8296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:48:13.062192    8296 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 18:48:13.062192    8296 ubuntu.go:71] root file system type: overlay
	I0229 18:48:13.062192    8296 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:48:13.079385    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:13.266608    8296 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:13.267154    8296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59396 <nil> <nil>}
	I0229 18:48:13.267387    8296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:48:13.488081    8296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:48:13.499020    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:13.701743    8296 main.go:141] libmachine: Using SSH client type: native
	I0229 18:48:13.701782    8296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59396 <nil> <nil>}
	I0229 18:48:13.701782    8296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:48:34.610170    8296 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 18:48:13.472395721 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0229 18:48:34.610170    8296 machine.go:91] provisioned docker machine in 23.7724699s
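
The diff above is produced by the compare-then-swap one-liner from the SSH command earlier: diff -u exits 0 only when the staged docker.service.new matches the installed unit, so the mv/daemon-reload/enable/restart branch after || runs only when something actually changed. The same idiom in isolation:

    # Replace the unit and restart docker only if the staged copy differs:
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }

An unchanged unit therefore costs nothing; the docker restart is paid only on real configuration changes.
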
	I0229 18:48:34.610170    8296 client.go:171] LocalClient.Create took 1m0.1203257s
	I0229 18:48:34.610170    8296 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-996700" took 1m0.1203257s
	I0229 18:48:34.610170    8296 start.go:300] post-start starting for "kubernetes-upgrade-996700" (driver="docker")
	I0229 18:48:34.610170    8296 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:48:34.624160    8296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:48:34.632150    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:34.795152    8296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59396 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:48:34.963307    8296 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:48:34.977299    8296 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 18:48:34.977299    8296 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 18:48:34.977299    8296 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 18:48:34.977299    8296 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 18:48:34.977299    8296 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 18:48:34.977299    8296 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 18:48:34.978322    8296 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 18:48:34.990312    8296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:48:35.008308    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 18:48:35.049315    8296 start.go:303] post-start completed in 439.1414ms
	I0229 18:48:35.062311    8296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-996700
	I0229 18:48:35.237115    8296 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\config.json ...
	I0229 18:48:35.252121    8296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:48:35.261127    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:35.460114    8296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59396 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:48:35.600789    8296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 18:48:35.615257    8296 start.go:128] duration metric: createHost completed in 1m1.1363518s
	I0229 18:48:35.615316    8296 start.go:83] releasing machines lock for "kubernetes-upgrade-996700", held for 1m1.1371703s
	I0229 18:48:35.627729    8296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-996700
	I0229 18:48:35.817794    8296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:48:35.830777    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:35.831783    8296 ssh_runner.go:195] Run: cat /version.json
	I0229 18:48:35.842778    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:36.017656    8296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59396 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:48:36.023597    8296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59396 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:48:36.473205    8296 ssh_runner.go:195] Run: systemctl --version
	I0229 18:48:36.506197    8296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:48:36.536202    8296 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0229 18:48:36.559210    8296 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
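
This non-fatal failure is caused by the probe path being rendered with Windows separators: find on the Linux guest treats \etc\cni\net.d as a literal relative filename rather than a directory path. The POSIX form of the same probe, for comparison (a sketch; the -name patterns are quoted here so the remote shell cannot glob-expand them):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      -name '*loopback.conf*' -not -name '*.mk_disabled'
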
	I0229 18:48:36.576202    8296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:48:36.635200    8296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:48:36.754174    8296 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:48:36.754174    8296 start.go:475] detecting cgroup driver to use...
	I0229 18:48:36.754174    8296 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:48:36.754721    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:48:36.862167    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:48:36.905117    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:48:36.931119    8296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:48:36.947111    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:48:36.984125    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:48:37.150922    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:48:37.218607    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:48:37.254240    8296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:48:37.290072    8296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:48:37.339086    8296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:48:37.381086    8296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
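
Both sysctl touches above are one-shot writes that do not survive a reboot of the guest; on a longer-lived node the equivalent settings would normally be persisted through sysctl.d (illustrative only; the fragment name is an assumption, not something minikube creates):

    printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system   # reload every sysctl.d fragment
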
	I0229 18:48:37.419071    8296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:48:37.589670    8296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:48:37.794691    8296 start.go:475] detecting cgroup driver to use...
	I0229 18:48:37.794691    8296 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:48:37.812796    8296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:48:37.843670    8296 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 18:48:37.865725    8296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:48:37.898685    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:48:37.966706    8296 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:48:37.995680    8296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:48:38.017727    8296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:48:38.078709    8296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:48:38.318926    8296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:48:38.579016    8296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:48:38.579678    8296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:48:38.630499    8296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:48:38.821012    8296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:48:39.716960    8296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:48:39.796605    8296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:48:39.859293    8296 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0229 18:48:39.867294    8296 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-996700 dig +short host.docker.internal
	I0229 18:48:40.145656    8296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 18:48:40.158088    8296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 18:48:40.167350    8296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
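
The /etc/hosts rewrite above is idempotent: it filters out any stale line for the name, appends the fresh mapping, and copies the temp file back over /etc/hosts. The same idiom in isolation (NAME and IP mirror the values from the log):

    NAME=host.minikube.internal
    IP=192.168.65.254
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
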
	I0229 18:48:40.198335    8296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:48:40.398340    8296 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:48:40.410342    8296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:48:40.468234    8296 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:48:40.468350    8296 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:48:40.484321    8296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:48:40.522323    8296 ssh_runner.go:195] Run: which lz4
	I0229 18:48:40.547325    8296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:48:40.562636    8296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:48:40.562954    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 18:48:54.733550    8296 docker.go:649] Took 14.198115 seconds to copy over tarball
	I0229 18:48:54.746537    8296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:48:58.233874    8296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.4873094s)
	I0229 18:48:58.233874    8296 ssh_runner.go:146] rm: /preloaded.tar.lz4
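
The preload path avoids pulling eight images one by one: a ~370 MB lz4 tarball of /var/lib/docker is copied into the guest and unpacked in place. The extraction and cleanup, standalone (same flags as the log; -I hands decompression to the external lz4 binary, --xattrs preserves file capabilities on the restored binaries):

    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
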
	I0229 18:48:58.328141    8296 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:48:58.351880    8296 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 18:48:58.396867    8296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:48:58.540926    8296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:49:09.055125    8296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (10.5141148s)
	I0229 18:49:09.065129    8296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:49:09.109150    8296 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:49:09.109150    8296 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:49:09.109150    8296 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:49:09.126206    8296 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:49:09.133134    8296 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:49:09.138141    8296 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:49:09.140127    8296 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:49:09.141135    8296 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:49:09.141135    8296 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:49:09.142132    8296 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:49:09.146178    8296 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:49:09.147145    8296 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:49:09.150135    8296 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:49:09.156149    8296 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:49:09.156149    8296 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:49:09.160133    8296 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:49:09.160133    8296 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:49:09.163150    8296 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:49:09.165125    8296 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	W0229 18:49:09.275406    8296 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:49:09.370409    8296 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:49:09.464519    8296 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:49:09.545407    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0229 18:49:09.559412    8296 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:49:09.670409    8296 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:49:09.765311    8296 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:49:09.813241    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:49:09.817217    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:49:09.840223    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0229 18:49:09.857215    8296 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:49:09.875218    8296 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:49:09.875218    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 18:49:09.875218    8296 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:49:09.878218    8296 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:49:09.878218    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 18:49:09.878218    8296 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:49:09.890231    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:49:09.891231    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 18:49:09.892224    8296 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:49:09.892224    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 18:49:09.892224    8296 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 18:49:09.903246    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 18:49:09.944219    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:49:09.950248    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 18:49:09.954215    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 18:49:09.958218    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	W0229 18:49:09.966208    8296 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:49:09.974214    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:49:09.987206    8296 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:49:09.987206    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 18:49:09.987206    8296 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:49:09.998208    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:49:10.063036    8296 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:49:10.063036    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 18:49:10.063036    8296 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:49:10.069055    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 18:49:10.073053    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:49:10.114035    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 18:49:10.143083    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:49:10.187263    8296 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:49:10.187263    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 18:49:10.187263    8296 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:49:10.200240    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:49:10.244234    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 18:49:10.257233    8296 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:49:10.310232    8296 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:49:10.310232    8296 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 18:49:10.310232    8296 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:49:10.324238    8296 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:49:10.370249    8296 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 18:49:10.371245    8296 cache_images.go:92] LoadImages completed in 1.2620846s
	W0229 18:49:10.371245    8296 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0: The system cannot find the file specified.
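
LoadImages fails here because the v1.16 preload ships images tagged k8s.gcr.io/* while this minikube looks for registry.k8s.io/*, and the per-image cache files under .minikube\cache\images do not exist on this host. The X warning is non-fatal; kubeadm pulls whatever is missing during init. One out-of-band way to repopulate a single image, sketched with the etcd tag from the log (minikube image load is assumed available in this minikube version):

    docker pull registry.k8s.io/etcd:3.3.15-0
    minikube -p kubernetes-upgrade-996700 image load registry.k8s.io/etcd:3.3.15-0
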
	I0229 18:49:10.380253    8296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:49:10.514243    8296 cni.go:84] Creating CNI manager for ""
	I0229 18:49:10.514243    8296 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:49:10.514243    8296 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:49:10.514243    8296 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-996700 NodeName:kubernetes-upgrade-996700 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:49:10.515240    8296 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-996700"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-996700
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:49:10.515240    8296 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-996700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
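
The kubeadm config and the kubelet drop-in above are staged (kubeadm.yaml.new) and only copied into place a few steps later. With the v1.16 binaries already on the node, the same config can be exercised without mutating cluster state via kubeadm's dry-run mode (a sketch using the paths from the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
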
	I0229 18:49:10.527237    8296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:49:10.544236    8296 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:49:10.557251    8296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:49:10.575615    8296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0229 18:49:10.604652    8296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:49:10.637780    8296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0229 18:49:10.867023    8296 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0229 18:49:10.879639    8296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:49:10.907228    8296 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700 for IP: 192.168.85.2
	I0229 18:49:10.907337    8296 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:10.907956    8296 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 18:49:10.908328    8296 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:49:10.909310    8296 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.key
	I0229 18:49:10.909310    8296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.crt with IP's: []
	I0229 18:49:11.259832    8296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.crt ...
	I0229 18:49:11.259832    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.crt: {Name:mk49143e6d8a5e199929ee3a868724f9c08ab644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:11.261768    8296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.key ...
	I0229 18:49:11.261812    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.key: {Name:mk3cc3a6ed4b2f280babbe8852e8656916d9b11e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:11.263621    8296 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key.43b9df8c
	I0229 18:49:11.263820    8296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:49:11.669207    8296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt.43b9df8c ...
	I0229 18:49:11.669207    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt.43b9df8c: {Name:mk465a8e578e017953f8d799fc4d25b57499ca90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:11.671219    8296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key.43b9df8c ...
	I0229 18:49:11.671219    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key.43b9df8c: {Name:mk782a427feabdd2662961641d4b596ee01240ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:11.672237    8296 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt
	I0229 18:49:11.684219    8296 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key
	I0229 18:49:11.685220    8296 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.key
	I0229 18:49:11.685220    8296 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.crt with IP's: []
	I0229 18:49:12.227308    8296 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.crt ...
	I0229 18:49:12.227308    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.crt: {Name:mk36b99385118f1b667d69e92ec2f3c2014b6dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:12.228301    8296 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.key ...
	I0229 18:49:12.228301    8296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.key: {Name:mk42d97def6027aef12a8d4fc48e9a2e1cff2459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:49:12.244326    8296 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 18:49:12.244326    8296 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 18:49:12.245320    8296 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 18:49:12.245320    8296 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 18:49:12.245320    8296 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:49:12.245320    8296 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:49:12.245320    8296 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 18:49:12.245320    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:49:12.301310    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:49:12.368159    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:49:12.415865    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:49:12.458857    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:49:12.504778    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:49:12.558862    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:49:12.618884    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:49:12.667485    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 18:49:12.711497    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:49:12.755713    8296 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 18:49:12.795714    8296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:49:12.841680    8296 ssh_runner.go:195] Run: openssl version
	I0229 18:49:12.874689    8296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 18:49:12.918893    8296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 18:49:12.929905    8296 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 18:49:12.945905    8296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 18:49:12.980159    8296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:49:13.015294    8296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:49:13.047990    8296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:13.058983    8296 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:13.069984    8296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:49:13.101979    8296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:49:13.140700    8296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 18:49:13.180810    8296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 18:49:13.191713    8296 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 18:49:13.210843    8296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 18:49:13.246944    8296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
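
The certificate wiring above follows OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, which is why minikubeCA.pem is linked as b5213941.0. The idiom for a single file:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA, per the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
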
	I0229 18:49:13.284838    8296 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:49:13.300806    8296 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:49:13.300806    8296 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-996700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:49:13.314465    8296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:49:13.386763    8296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:49:13.417768    8296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:49:13.441565    8296 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 18:49:13.463316    8296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:49:13.486951    8296 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:49:13.486951    8296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 18:49:13.856456    8296 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:49:13.856793    8296 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 18:49:13.966598    8296 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0229 18:49:14.226030    8296 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:53:20.656966    8296 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:53:20.656966    8296 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:53:20.660963    8296 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:53:20.661979    8296 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:53:20.661979    8296 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:53:20.661979    8296 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:53:20.662952    8296 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:53:20.662952    8296 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:53:20.662952    8296 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:53:20.662952    8296 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:53:20.662952    8296 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:53:20.666956    8296 out.go:204]   - Generating certificates and keys ...
	I0229 18:53:20.666956    8296 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:53:20.666956    8296 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:53:20.666956    8296 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:53:20.666956    8296 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:53:20.668168    8296 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:53:20.668168    8296 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:53:20.668168    8296 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:53:20.668972    8296 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-996700 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0229 18:53:20.668972    8296 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:53:20.668972    8296 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-996700 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0229 18:53:20.668972    8296 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:53:20.669967    8296 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:53:20.669967    8296 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:53:20.669967    8296 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:53:20.669967    8296 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:53:20.669967    8296 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:53:20.669967    8296 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:53:20.669967    8296 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:53:20.670996    8296 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:53:20.672964    8296 out.go:204]   - Booting up control plane ...
	I0229 18:53:20.672964    8296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:53:20.672964    8296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:53:20.673958    8296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:53:20.673958    8296 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:53:20.673958    8296 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:53:20.673958    8296 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:53:20.673958    8296 kubeadm.go:322] 
	I0229 18:53:20.674982    8296 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:53:20.674982    8296 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:53:20.674982    8296 kubeadm.go:322] 
	I0229 18:53:20.674982    8296 kubeadm.go:322] This error is likely caused by:
	I0229 18:53:20.674982    8296 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:53:20.674982    8296 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:53:20.674982    8296 kubeadm.go:322] 
	I0229 18:53:20.675953    8296 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:53:20.675953    8296 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:53:20.675953    8296 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:53:20.675953    8296 kubeadm.go:322] 
	I0229 18:53:20.675953    8296 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:53:20.677017    8296 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:53:20.677017    8296 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:53:20.677017    8296 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:53:20.677017    8296 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:53:20.677017    8296 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 18:53:20.677962    8296 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-996700 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-996700 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-996700 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-996700 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:53:20.677962    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:53:32.035339    8296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (11.3572843s)
	I0229 18:53:32.049522    8296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:53:32.075015    8296 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 18:53:32.087225    8296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:53:32.109389    8296 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:53:32.109389    8296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 18:53:32.432648    8296 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:53:32.433168    8296 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 18:53:32.522990    8296 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0229 18:53:32.689468    8296 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:57:34.291061    8296 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:57:34.291439    8296 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:57:34.298449    8296 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:57:34.298685    8296 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:57:34.298685    8296 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:57:34.298685    8296 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:57:34.298685    8296 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:57:34.299466    8296 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:57:34.299466    8296 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:57:34.300016    8296 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:57:34.300397    8296 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:57:34.305908    8296 out.go:204]   - Generating certificates and keys ...
	I0229 18:57:34.306616    8296 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:57:34.306776    8296 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:57:34.306964    8296 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:57:34.307132    8296 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:57:34.307179    8296 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:57:34.307179    8296 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:57:34.307179    8296 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:57:34.307860    8296 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:57:34.307911    8296 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:57:34.307911    8296 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:57:34.307911    8296 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:57:34.308611    8296 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:57:34.308611    8296 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:57:34.308611    8296 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:57:34.308611    8296 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:57:34.309150    8296 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:57:34.309276    8296 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:57:34.315589    8296 out.go:204]   - Booting up control plane ...
	I0229 18:57:34.315745    8296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:57:34.315896    8296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:57:34.315896    8296 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:57:34.315896    8296 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:57:34.316727    8296 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:57:34.316832    8296 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:57:34.316906    8296 kubeadm.go:322] 
	I0229 18:57:34.317062    8296 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:57:34.317128    8296 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:57:34.317128    8296 kubeadm.go:322] 
	I0229 18:57:34.317128    8296 kubeadm.go:322] This error is likely caused by:
	I0229 18:57:34.317128    8296 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:57:34.317128    8296 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:57:34.317128    8296 kubeadm.go:322] 
	I0229 18:57:34.317808    8296 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:57:34.317865    8296 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:57:34.317981    8296 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:57:34.317981    8296 kubeadm.go:322] 
	I0229 18:57:34.318304    8296 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:57:34.318406    8296 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:57:34.318406    8296 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:57:34.318406    8296 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:57:34.318406    8296 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:57:34.319065    8296 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:57:34.319223    8296 kubeadm.go:406] StartCluster complete in 8m21.0143676s
	I0229 18:57:34.327928    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:57:34.373926    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.373926    8296 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:57:34.383761    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:57:34.418717    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.418717    8296 logs.go:278] No container was found matching "etcd"
	I0229 18:57:34.427911    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:57:34.464660    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.464660    8296 logs.go:278] No container was found matching "coredns"
	I0229 18:57:34.476320    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:57:34.514412    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.514412    8296 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:57:34.524422    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:57:34.561225    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.561807    8296 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:57:34.572256    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:57:34.608962    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.609033    8296 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:57:34.617137    8296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:57:34.653231    8296 logs.go:276] 0 containers: []
	W0229 18:57:34.653231    8296 logs.go:278] No container was found matching "kindnet"
	I0229 18:57:34.653327    8296 logs.go:123] Gathering logs for kubelet ...
	I0229 18:57:34.653327    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:57:34.700781    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:14 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:14.727836    6055 pod_workers.go:191] Error syncing pod ebfcedfcf37aa29aa0d98d550dfcab27 ("etcd-kubernetes-upgrade-996700_kube-system(ebfcedfcf37aa29aa0d98d550dfcab27)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 18:57:34.703745    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:15 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:15.735914    6055 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-kubernetes-upgrade-996700_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 18:57:34.704757    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:15 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:15.740181    6055 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-996700_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 18:57:34.713925    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:19 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:19.734914    6055 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-996700_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 18:57:34.727223    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:25 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:25.727017    6055 pod_workers.go:191] Error syncing pod ebfcedfcf37aa29aa0d98d550dfcab27 ("etcd-kubernetes-upgrade-996700_kube-system(ebfcedfcf37aa29aa0d98d550dfcab27)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 18:57:34.731588    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:27 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:27.729576    6055 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-996700_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 18:57:34.734032    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:28 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:28.732286    6055 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-kubernetes-upgrade-996700_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 18:57:34.741915    8296 logs.go:138] Found kubelet problem: Feb 29 18:57:32 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:32.733230    6055 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-kubernetes-upgrade-996700_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 18:57:34.746096    8296 logs.go:123] Gathering logs for dmesg ...
	I0229 18:57:34.746096    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:57:34.771289    8296 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:57:34.771289    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:57:34.884401    8296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:57:34.884496    8296 logs.go:123] Gathering logs for Docker ...
	I0229 18:57:34.884524    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:57:34.918436    8296 logs.go:123] Gathering logs for container status ...
	I0229 18:57:34.918436    8296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 18:57:34.999947    8296 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:57:34.999947    8296 out.go:239] * 
	* 
	W0229 18:57:35.000485    8296 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:57:35.000822    8296 out.go:239] * 
	* 
	W0229 18:57:35.002330    8296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:57:35.004936    8296 out.go:177] X Problems detected in kubelet:
	I0229 18:57:35.009901    8296 out.go:177]   Feb 29 18:57:14 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:14.727836    6055 pod_workers.go:191] Error syncing pod ebfcedfcf37aa29aa0d98d550dfcab27 ("etcd-kubernetes-upgrade-996700_kube-system(ebfcedfcf37aa29aa0d98d550dfcab27)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 18:57:35.016973    8296 out.go:177]   Feb 29 18:57:15 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:15.735914    6055 pod_workers.go:191] Error syncing pod 6388f061f25c003b1a04f0626fa921d8 ("kube-apiserver-kubernetes-upgrade-996700_kube-system(6388f061f25c003b1a04f0626fa921d8)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 18:57:35.022908    8296 out.go:177]   Feb 29 18:57:15 kubernetes-upgrade-996700 kubelet[6055]: E0229 18:57:15.740181    6055 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-kubernetes-upgrade-996700_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 18:57:35.029921    8296 out.go:177] 
	W0229 18:57:35.033008    8296 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:57:35.033008    8296 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:57:35.033008    8296 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:57:35.039948    8296 out.go:177] 

                                                
                                                
** /stderr **
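The kubeadm output above repeats the same triage advice several times; collected here as one runnable sequence. A minimal sketch, assuming shell access to the node container via `minikube ssh` (the profile name comes from the log; CONTAINERID is a placeholder, and the `docker image inspect` step is an assumption prompted by the ImageInspectError entries in the kubelet log):

    # On the Windows host: open a shell inside the node container
    minikube ssh -p kubernetes-upgrade-996700

    # Inside the node: is the kubelet running, and why did it stop?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100

    # List control-plane containers, running or exited
    sudo docker ps -a | grep kube | grep -v pause

    # Inspect the logs of a failing container (CONTAINERID is a placeholder)
    sudo docker logs CONTAINERID

    # The ImageInspectError lines suggest the cached images themselves are suspect
    sudo docker image inspect k8s.gcr.io/etcd:3.3.15-0

    # Back on the host: capture everything for a bug report
    minikube logs -p kubernetes-upgrade-996700 --file=logs.txt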
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-996700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-996700: (2.5568275s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-996700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-996700 status --format={{.Host}}: exit status 7 (433.1863ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:57:38.163115   10304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
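The `--format={{.Host}}` flag used above is a Go template over minikube's status output, so a non-zero exit (7 here) together with "Stopped" on stdout is the expected shape for a halted cluster, which is why the harness records it as "may be ok". A small sketch of scoping the probe; {{.Host}} is confirmed by the log, while {{.Kubelet}} and {{.APIServer}} are assumed from minikube's default status layout:

    # Host state only, as the test does; prints "Stopped" for a stopped profile
    out/minikube-windows-amd64.exe -p kubernetes-upgrade-996700 status --format={{.Host}}

    # Assumed additional fields from the default status layout
    out/minikube-windows-amd64.exe -p kubernetes-upgrade-996700 status --format="{{.Kubelet}} {{.APIServer}}"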
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker
E0229 18:58:21.748339    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker: (55.558753s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-996700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (287.9424ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-996700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:58:34.327107    8432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-996700
	    minikube start -p kubernetes-upgrade-996700 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9967002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-996700 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
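
Note: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is a guard rather than a crash; the refused start leaves the existing v1.29.0-rc.2 cluster running, which is why the restart below succeeds. The recovery path the suggestion above describes, as a runnable sketch (commands copied from the suggestion; only applicable if a downgrade is actually wanted):

    # Downgrading requires recreating the profile from scratch.
    minikube delete -p kubernetes-upgrade-996700
    minikube start -p kubernetes-upgrade-996700 --kubernetes-version=v1.16.0
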
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-996700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker: (42.7410686s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 18:59:17.2403902 +0000 UTC m=+4864.695420801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-996700
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-996700:

-- stdout --
	[
	    {
	        "Id": "912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c",
	        "Created": "2024-02-29T18:48:04.877657766Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:57:40.852371686Z",
	            "FinishedAt": "2024-02-29T18:57:37.156038192Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c/hostname",
	        "HostsPath": "/var/lib/docker/containers/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c/hosts",
	        "LogPath": "/var/lib/docker/containers/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c-json.log",
	        "Name": "/kubernetes-upgrade-996700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-996700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-996700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1bdb5020cfce6a5648fdaf1722a383ed02298aaf5af3fdc6c6830f1071c698e8-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1bdb5020cfce6a5648fdaf1722a383ed02298aaf5af3fdc6c6830f1071c698e8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1bdb5020cfce6a5648fdaf1722a383ed02298aaf5af3fdc6c6830f1071c698e8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1bdb5020cfce6a5648fdaf1722a383ed02298aaf5af3fdc6c6830f1071c698e8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-996700",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-996700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-996700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-996700",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-996700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9442eee171a9e27ad4058b0403ecfaa1a646475dc261611c8a1d8e69fdab5335",
	            "SandboxKey": "/var/run/docker/netns/9442eee171a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59923"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59924"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59920"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59921"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59922"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-996700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "912cacb5f1d4",
	                        "kubernetes-upgrade-996700"
	                    ],
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "fdf4183cdf174a6ebae8bda701f502a29b362ca134b1fdfac0840ba47e81d1a7",
	                    "EndpointID": "8465c3623862798f2c5124f236c665da403d0ef7c14abb034e178cbdae18ae2d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "kubernetes-upgrade-996700",
	                        "912cacb5f1d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
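
Note: the post-mortem dumps the full inspect document; individual fields can be pulled with a Go template instead, the same mechanism the harness itself uses later in this log (sketch; field paths match the JSON above):

    # Run state only ("running" at the time of this dump).
    docker inspect kubernetes-upgrade-996700 --format '{{.State.Status}}'
    # Container IP on the profile network (192.168.85.2 above).
    docker inspect kubernetes-upgrade-996700 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
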
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-996700 -n kubernetes-upgrade-996700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-996700 -n kubernetes-upgrade-996700: (1.4425702s)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-996700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-996700 logs -n 25: (2.3524128s)
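
Note: the Audit and Last Start sections below are the output of the logs subcommand, with -n bounding how many lines back the tail reaches. Reproducing the same dump by hand (sketch; profile name from this run):

    minikube -p kubernetes-upgrade-996700 logs -n 25
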
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p missing-upgrade-251000                              | missing-upgrade-251000    | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:50 UTC |
	| start   | -p cert-options-476400                                 | cert-options-476400       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:50 UTC | 29 Feb 24 18:52 UTC |
	|         | --memory=2048                                          |                           |                   |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |                   |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |                   |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |                   |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |                   |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --apiserver-name=localhost                             |                           |                   |         |                     |                     |
	| ssh     | docker-flags-330600 ssh                                | docker-flags-330600       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | sudo systemctl show docker                             |                           |                   |         |                     |                     |
	|         | --property=Environment                                 |                           |                   |         |                     |                     |
	|         | --no-pager                                             |                           |                   |         |                     |                     |
	| ssh     | docker-flags-330600 ssh                                | docker-flags-330600       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	|         | sudo systemctl show docker                             |                           |                   |         |                     |                     |
	|         | --property=ExecStart                                   |                           |                   |         |                     |                     |
	|         | --no-pager                                             |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-330600                                 | docker-flags-330600       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:51 UTC | 29 Feb 24 18:51 UTC |
	| start   | -p old-k8s-version-718400                              | old-k8s-version-718400    | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:51 UTC |                     |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                           |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |                   |         |                     |                     |
	|         | --keep-context=false                                   |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |                   |         |                     |                     |
	| ssh     | cert-options-476400 ssh                                | cert-options-476400       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 18:52 UTC |
	|         | openssl x509 -text -noout -in                          |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-476400 -- sudo                         | cert-options-476400       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 18:52 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |                   |         |                     |                     |
	| delete  | -p cert-options-476400                                 | cert-options-476400       | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 18:52 UTC |
	| start   | -p no-preload-500400                                   | no-preload-500400         | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:52 UTC | 29 Feb 24 18:54 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	| start   | -p cert-expiration-080100                              | cert-expiration-080100    | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | --memory=2048                                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500400             | no-preload-500400         | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |                   |         |                     |                     |
	| stop    | -p no-preload-500400                                   | no-preload-500400         | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | --alsologtostderr -v=3                                 |                           |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500400                  | no-preload-500400         | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |                   |         |                     |                     |
	| start   | -p no-preload-500400                                   | no-preload-500400         | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	| delete  | -p cert-expiration-080100                              | cert-expiration-080100    | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	| start   | -p embed-certs-058300                                  | embed-certs-058300        | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:56 UTC |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |                   |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-058300            | embed-certs-058300        | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:56 UTC | 29 Feb 24 18:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |                   |         |                     |                     |
	| stop    | -p embed-certs-058300                                  | embed-certs-058300        | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:56 UTC | 29 Feb 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                           |                   |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-058300                 | embed-certs-058300        | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:56 UTC | 29 Feb 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |                   |         |                     |                     |
	| start   | -p embed-certs-058300                                  | embed-certs-058300        | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:56 UTC |                     |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-996700                           | kubernetes-upgrade-996700 | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:57 UTC | 29 Feb 24 18:57 UTC |
	| start   | -p kubernetes-upgrade-996700                           | kubernetes-upgrade-996700 | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:57 UTC | 29 Feb 24 18:58 UTC |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-996700                           | kubernetes-upgrade-996700 | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:58 UTC |                     |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-996700                           | kubernetes-upgrade-996700 | minikube7\jenkins | v1.32.0 | 29 Feb 24 18:58 UTC | 29 Feb 24 18:59 UTC |
	|         | --memory=2200                                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                           |                   |         |                     |                     |
	|         | --driver=docker                                        |                           |                   |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:58:34
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:58:34.678547    8948 out.go:291] Setting OutFile to fd 1580 ...
	I0229 18:58:34.678711    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:58:34.678711    8948 out.go:304] Setting ErrFile to fd 1696...
	I0229 18:58:34.678711    8948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:58:34.692029    8948 out.go:298] Setting JSON to false
	I0229 18:58:34.702114    8948 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11074,"bootTime":1709222039,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 18:58:34.702114    8948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:58:34.704453    8948 out.go:177] * [kubernetes-upgrade-996700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:58:34.708845    8948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 18:58:34.708845    8948 notify.go:220] Checking for updates...
	I0229 18:58:34.714761    8948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:58:34.717403    8948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 18:58:34.719853    8948 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:58:34.722261    8948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:58:34.724802    8948 config.go:182] Loaded profile config "kubernetes-upgrade-996700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:58:34.725880    8948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:58:35.013915    8948 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 18:58:35.028456    8948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:58:35.420445    8948 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:98 SystemTime:2024-02-29 18:58:35.374965658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:58:35.426136    8948 out.go:177] * Using the docker driver based on existing profile
	I0229 18:58:35.429164    8948 start.go:299] selected driver: docker
	I0229 18:58:35.429164    8948 start.go:903] validating driver "docker" against &{Name:kubernetes-upgrade-996700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:35.429164    8948 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:58:35.493362    8948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:58:35.880070    8948 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:98 SystemTime:2024-02-29 18:58:35.830350005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:58:35.880319    8948 cni.go:84] Creating CNI manager for ""
	I0229 18:58:35.880319    8948 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:58:35.880319    8948 start_flags.go:323] config:
	{Name:kubernetes-upgrade-996700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:35.883800    8948 out.go:177] * Starting control plane node kubernetes-upgrade-996700 in cluster kubernetes-upgrade-996700
	I0229 18:58:35.887042    8948 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 18:58:35.890302    8948 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 18:58:32.387779   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:34.879824   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:35.892889    8948 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:58:35.892953    8948 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 18:58:35.893126    8948 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 18:58:35.893126    8948 cache.go:56] Caching tarball of preloaded images
	I0229 18:58:35.893600    8948 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:58:35.893770    8948 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 18:58:35.893827    8948 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\config.json ...
	I0229 18:58:36.086956    8948 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 18:58:36.086956    8948 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 18:58:36.086956    8948 cache.go:194] Successfully downloaded all kic artifacts
	I0229 18:58:36.086956    8948 start.go:365] acquiring machines lock for kubernetes-upgrade-996700: {Name:mk89dcbbe5e4802e74ccae3022ddca2e6fcb62d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:58:36.086956    8948 start.go:369] acquired machines lock for "kubernetes-upgrade-996700" in 0s
	I0229 18:58:36.086956    8948 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:58:36.087502    8948 fix.go:54] fixHost starting: 
	I0229 18:58:36.105499    8948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:58:36.280726    8948 fix.go:102] recreateIfNeeded on kubernetes-upgrade-996700: state=Running err=<nil>
	W0229 18:58:36.280840    8948 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:58:36.283789    8948 out.go:177] * Updating the running docker "kubernetes-upgrade-996700" container ...
	I0229 18:58:35.481199   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:37.491456   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:36.285765    8948 machine.go:88] provisioning docker machine ...
	I0229 18:58:36.286300    8948 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-996700"
	I0229 18:58:36.296462    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:36.475540    8948 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:36.476316    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59923 <nil> <nil>}
	I0229 18:58:36.476390    8948 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-996700 && echo "kubernetes-upgrade-996700" | sudo tee /etc/hostname
	I0229 18:58:36.670327    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-996700
	
	I0229 18:58:36.680058    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:36.860966    8948 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:36.860966    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59923 <nil> <nil>}
	I0229 18:58:36.860966    8948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-996700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-996700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-996700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:58:37.055202    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:37.055202    8948 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 18:58:37.055202    8948 ubuntu.go:177] setting up certificates
	I0229 18:58:37.055202    8948 provision.go:83] configureAuth start
	I0229 18:58:37.075260    8948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-996700
	I0229 18:58:37.272365    8948 provision.go:138] copyHostCerts
	I0229 18:58:37.272365    8948 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:58:37.272365    8948 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 18:58:37.273141    8948 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:58:37.273765    8948 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:58:37.273765    8948 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 18:58:37.274429    8948 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 18:58:37.275137    8948 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:58:37.275137    8948 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 18:58:37.275836    8948 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:58:37.276481    8948 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-996700 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-996700]
	I0229 18:58:37.537425    8948 provision.go:172] copyRemoteCerts
	I0229 18:58:37.559490    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:58:37.570879    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:37.762374    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:58:37.889314    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:58:37.946156    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:58:38.003575    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:58:38.055795    8948 provision.go:86] duration metric: configureAuth took 1.0005851s
	I0229 18:58:38.055892    8948 ubuntu.go:193] setting minikube options for container-runtime
	I0229 18:58:38.056469    8948 config.go:182] Loaded profile config "kubernetes-upgrade-996700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:58:38.069062    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:38.287585    8948 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:38.288104    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59923 <nil> <nil>}
	I0229 18:58:38.288104    8948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:58:38.461372    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 18:58:38.461489    8948 ubuntu.go:71] root file system type: overlay
	I0229 18:58:38.461755    8948 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:58:38.475365    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:38.661033    8948 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:38.662423    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59923 <nil> <nil>}
	I0229 18:58:38.662637    8948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:58:38.862096    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:58:38.872839    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:39.094127    8948 main.go:141] libmachine: Using SSH client type: native
	I0229 18:58:39.094522    8948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59923 <nil> <nil>}
	I0229 18:58:39.094522    8948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:58:39.281183    8948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:58:39.281183    8948 machine.go:91] provisioned docker machine in 2.9948595s
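The one-liner above is what makes provisioning idempotent: the new unit is staged as docker.service.new, and only when `diff` exits non-zero (the files differ) does the `|| { ... }` block move it into place and daemon-reload/restart. A local sketch of the same pattern, assuming the paths from the log; in the real run this line executes over SSH inside the node container:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// diff exits 0 when the files are identical, so the replacement block
	// only runs when docker.service.new actually differs.
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
}`
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("unit update failed: %v\n%s", err, out)
	}
	log.Printf("docker.service up to date or refreshed:\n%s", out)
}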
	I0229 18:58:39.281183    8948 start.go:300] post-start starting for "kubernetes-upgrade-996700" (driver="docker")
	I0229 18:58:39.281183    8948 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:58:39.293316    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:58:39.304962    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:39.484655    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:58:39.632754    8948 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:58:39.646601    8948 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 18:58:39.646715    8948 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 18:58:39.646715    8948 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 18:58:39.646715    8948 info.go:137] Remote host: Ubuntu 22.04.3 LTS
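The three "Couldn't set key ..." lines are benign: libmachine parses /etc/os-release into a struct, and keys with no matching field are reported and skipped. An illustrative reconstruction of that parse — the struct and field set here are assumptions for the sketch, not libmachine's actual type:

package main

import (
	"bufio"
	"fmt"
	"reflect"
	"strings"
)

// OSRelease models only a few /etc/os-release keys; keys without a
// matching field (VERSION_CODENAME, PRIVACY_POLICY_URL, ...) trigger the
// same kind of "no corresponding struct field" message seen in the log.
type OSRelease struct {
	ID         string
	Name       string
	Version    string
	PrettyName string
}

func parse(data string) OSRelease {
	var rel OSRelease
	v := reflect.ValueOf(&rel).Elem()
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		val = strings.Trim(val, `"`)
		// Map SOME_KEY -> SomeKey and look it up on the struct.
		parts := strings.Split(strings.ToLower(key), "_")
		for i, p := range parts {
			if p != "" {
				parts[i] = strings.ToUpper(p[:1]) + p[1:]
			}
		}
		f := v.FieldByName(strings.Join(parts, ""))
		if !f.IsValid() {
			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", key)
			continue
		}
		f.SetString(val)
	}
	return rel
}

func main() {
	fmt.Printf("%+v\n", parse("NAME=\"Ubuntu\"\nVERSION_CODENAME=jammy\nID=ubuntu\n"))
}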
	I0229 18:58:39.646769    8948 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 18:58:39.647087    8948 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 18:58:39.648061    8948 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 18:58:39.662524    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:58:37.373388   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:39.386030   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:39.497360   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:41.972926   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:39.683830    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 18:58:39.731857    8948 start.go:303] post-start completed in 450.6697ms
	I0229 18:58:39.743698    8948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:58:39.746805    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:39.917839    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:58:40.074388    8948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 18:58:40.087493    8948 fix.go:56] fixHost completed within 3.9999586s
	I0229 18:58:40.087493    8948 start.go:83] releasing machines lock for "kubernetes-upgrade-996700", held for 4.0005045s
	I0229 18:58:40.102401    8948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-996700
	I0229 18:58:40.297069    8948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:58:40.307530    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:40.307530    8948 ssh_runner.go:195] Run: cat /version.json
	I0229 18:58:40.312402    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:40.501684    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:58:40.515812    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:58:40.794920    8948 ssh_runner.go:195] Run: systemctl --version
	I0229 18:58:40.819386    8948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:58:40.833811    8948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:58:40.848631    8948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:58:40.887970    8948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:58:40.912154    8948 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0229 18:58:40.912154    8948 start.go:475] detecting cgroup driver to use...
	I0229 18:58:40.912296    8948 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:58:40.912546    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:40.959335    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:58:40.996401    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:58:41.020216    8948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:58:41.036801    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:58:41.076656    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:58:41.117432    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:58:41.158015    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:58:41.193481    8948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:58:41.233941    8948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:58:41.270461    8948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:58:41.304747    8948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:58:41.340596    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:41.516306    8948 ssh_runner.go:195] Run: sudo systemctl restart containerd
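Each sed above narrows /etc/containerd/config.toml toward the detected "cgroupfs" driver before the restart. A sketch of the SystemdCgroup edit done natively in Go instead of via sed — same file, same effect, assuming the default config path (minikube itself performs this remotely with sed over ssh_runner):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Force `SystemdCgroup = false` wherever the key appears, preserving
	// the original indentation via the captured leading whitespace.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("containerd configured for cgroupfs; restart containerd to apply")
}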
	I0229 18:58:41.880861   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:44.378228   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:44.518022   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:46.983283   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:46.870512   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:48.876527   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:49.476717   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:51.481095   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:52.026654    8948 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.510263s)
	I0229 18:58:52.026654    8948 start.go:475] detecting cgroup driver to use...
	I0229 18:58:52.026654    8948 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:58:52.045448    8948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:58:52.073913    8948 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 18:58:52.087355    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:58:52.124342    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:58:52.171116    8948 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:58:52.196910    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:58:52.220512    8948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:58:52.273440    8948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:58:52.474812    8948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:58:52.637074    8948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:58:52.637430    8948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:58:52.687743    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:52.861388    8948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:58:53.551106    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:58:53.643221    8948 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0229 18:58:53.745348    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:58:53.777882    8948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:58:53.940819    8948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:58:54.099086    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:54.250502    8948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:58:54.289823    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:58:54.323031    8948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:58:54.471627    8948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:58:54.623792    8948 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:58:54.636936    8948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 18:58:54.653149    8948 start.go:543] Will wait 60s for crictl version
	I0229 18:58:54.665207    8948 ssh_runner.go:195] Run: which crictl
	I0229 18:58:54.687956    8948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:58:54.774584    8948 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0229 18:58:54.784143    8948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:58:54.845739    8948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:58:51.369259   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:53.377924   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:55.386488   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:53.974161   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:56.484394   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:54.909191    8948 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 25.0.3 ...
	I0229 18:58:54.917861    8948 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-996700 dig +short host.docker.internal
	I0229 18:58:55.200592    8948 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 18:58:55.215737    8948 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
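dig inside the node container resolves host.docker.internal to the Docker Desktop host (192.168.65.254 here), and the grep checks whether /etc/hosts already carries the host.minikube.internal alias. A sketch of both steps, reusing the container name from this run; the append-if-missing shell line is an assumption about the follow-up, not a logged command:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the host's IP as seen from inside the node container.
	out, err := exec.Command("docker", "exec", "-t", "kubernetes-upgrade-996700",
		"dig", "+short", "host.docker.internal").Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host ip:", hostIP)

	// grep exits non-zero when the entry is missing, so the alias is only
	// appended on a miss.
	probe := fmt.Sprintf("grep -q 'host.minikube.internal' /etc/hosts || echo '%s\thost.minikube.internal' | sudo tee -a /etc/hosts", hostIP)
	if out, err := exec.Command("docker", "exec", "kubernetes-upgrade-996700", "sh", "-c", probe).CombinedOutput(); err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
}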
	I0229 18:58:55.236921    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:55.408765    8948 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:58:55.419052    8948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:58:55.469666    8948 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:58:55.469698    8948 docker.go:615] Images already preloaded, skipping extraction
	I0229 18:58:55.481425    8948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:58:55.530895    8948 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:58:55.530936    8948 cache_images.go:84] Images are preloaded, skipping loading
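The image listing is compared against the expected preload set for v1.29.0-rc.2; since everything is present, tarball extraction is skipped. A sketch of that comparison — the expected list below is abbreviated for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/etcd:3.5.10-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	missing := 0
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images already preloaded, skipping extraction")
	}
}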
	I0229 18:58:55.542182    8948 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:58:55.645497    8948 cni.go:84] Creating CNI manager for ""
	I0229 18:58:55.646135    8948 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:58:55.646135    8948 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:58:55.646217    8948 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-996700 NodeName:kubernetes-upgrade-996700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:58:55.646404    8948 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-996700"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:58:55.646569    8948 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-996700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:58:55.662110    8948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:58:55.684334    8948 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:58:55.697200    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:58:55.718192    8948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0229 18:58:55.760378    8948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:58:55.802297    8948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I0229 18:58:55.848424    8948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0229 18:58:55.862846    8948 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700 for IP: 192.168.85.2
	I0229 18:58:55.862846    8948 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:58:55.864675    8948 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 18:58:55.865301    8948 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:58:55.865960    8948 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\client.key
	I0229 18:58:55.865960    8948 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key.43b9df8c
	I0229 18:58:55.866772    8948 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.key
	I0229 18:58:55.868042    8948 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 18:58:55.868715    8948 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 18:58:55.868842    8948 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 18:58:55.869153    8948 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 18:58:55.869480    8948 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:58:55.869959    8948 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:58:55.870668    8948 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 18:58:55.872692    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:58:55.923287    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:58:56.108694    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:58:56.236388    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-996700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:58:56.427863    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:58:56.535087    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:58:56.651892    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:58:56.728094    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:58:56.774932    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 18:58:56.823240    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:58:56.883680    8948 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 18:58:57.006019    8948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:58:57.125802    8948 ssh_runner.go:195] Run: openssl version
	I0229 18:58:57.232317    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:58:57.338681    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:57.407973    8948 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:57.427357    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:58:57.528858    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:58:57.644725    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 18:58:57.745455    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 18:58:57.802763    8948 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 18:58:57.823630    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 18:58:57.924696    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
	I0229 18:58:58.028124    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 18:58:58.139129    8948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 18:58:58.210157    8948 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 18:58:58.222990    8948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 18:58:58.259874    8948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
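Each block above follows the same recipe: install the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-style verification can find the CA by hash. A sketch of one iteration, shelling out to openssl exactly as the logged commands do:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// link creates the /etc/ssl/certs/<subject-hash>.0 symlink for one CA.
func link(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	linkPath := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: drop any stale link before re-creating it.
	_ = os.Remove(linkPath)
	return os.Symlink(pem, linkPath)
}

func main() {
	if err := link("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}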
	I0229 18:58:58.324193    8948 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:58:58.350441    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:58:58.429546    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:58:58.522123    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:58:58.557329    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:58:58.619657    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:58:58.649678    8948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
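The -checkend 86400 probes fail (exit non-zero) if a certificate expires within the next 24 hours, which would force regeneration. The same check can be done without shelling out, using Go's crypto/x509 on the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path expires within the
// given window — a native equivalent of `openssl x509 -checkend`.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresSoon(p, 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}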
	I0229 18:58:58.706158    8948 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-996700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-996700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:58:58.717486    8948 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:58:58.821559    8948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:58:58.847481    8948 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:58:58.847538    8948 kubeadm.go:636] restartCluster start
	I0229 18:58:58.863663    8948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:58:58.912647    8948 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:58:58.923635    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:58:59.101241    8948 kubeconfig.go:92] found "kubernetes-upgrade-996700" server: "https://127.0.0.1:59922"
	I0229 18:58:59.104402    8948 kapi.go:59] client config for kubernetes-upgrade-996700: &rest.Config{Host:"https://127.0.0.1:59922", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-996700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-996700\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1dd0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:58:59.122157    8948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:58:59.207515    8948 api_server.go:166] Checking apiserver status ...
	I0229 18:58:59.221050    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:58:59.258823    8948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4349/cgroup
	I0229 18:58:59.308663    8948 api_server.go:182] apiserver freezer: "21:freezer:/docker/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c/kubepods/burstable/pod4daaa3b611b3fee13418d577c365aa5c/8e97c8b0663b02d39520251834061c81d0ce6d0cd3fca5d1471e1ef2368ff403"
	I0229 18:58:59.321216    8948 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/912cacb5f1d47c7e74c6e39fc3ec08cbbd93bad0ee1ab235923283add9cfa94c/kubepods/burstable/pod4daaa3b611b3fee13418d577c365aa5c/8e97c8b0663b02d39520251834061c81d0ce6d0cd3fca5d1471e1ef2368ff403/freezer.state
	I0229 18:58:59.343959    8948 api_server.go:204] freezer state: "THAWED"
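The freezer lookup confirms the existing apiserver container is THAWED (running) rather than paused before health-checking it: the freezer entry of /proc/<pid>/cgroup yields the cgroup path, and freezer.state under /sys/fs/cgroup holds the state. A cgroup-v1-only sketch, with the PID from this run used as a placeholder:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const pid = "4349" // placeholder: the apiserver PID found by pgrep above
	data, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		parts := strings.SplitN(line, ":", 3) // hierarchy:controllers:path
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("freezer state: %q\n", strings.TrimSpace(string(state)))
			return
		}
	}
	fmt.Println("no freezer controller (likely cgroup v2)")
}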
	I0229 18:58:59.344014    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:58:57.870761   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:59.871831   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:58:58.973881   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:00.977378   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:01.617455    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:01.617455    8948 retry.go:31] will retry after 242.124006ms: https://127.0.0.1:59922/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:01.869445    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:01.878328    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:01.878328    8948 retry.go:31] will retry after 282.553238ms: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:02.175265    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:02.187662    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:02.187662    8948 retry.go:31] will retry after 331.232753ms: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:02.528498    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:02.540645    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:02.541689    8948 retry.go:31] will retry after 450.141705ms: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:03.000394    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:03.015449    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 200:
	ok
	I0229 18:59:03.055087    8948 system_pods.go:86] 5 kube-system pods found
	I0229 18:59:03.055087    8948 system_pods.go:89] "etcd-kubernetes-upgrade-996700" [3167b379-deeb-49dd-907d-b20a357ec865] Running
	I0229 18:59:03.055087    8948 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-996700" [03015e1d-6d15-404c-a896-73d0e037769c] Running
	I0229 18:59:03.055630    8948 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-996700" [eb22f7ed-37fd-484f-b7fe-a88d9a556bba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:03.055694    8948 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-996700" [56160ee8-b7ef-4faf-9af9-afe2a971c666] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:03.055756    8948 system_pods.go:89] "storage-provisioner" [19120b80-6d95-42e2-8f3f-bdd377e524f7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0229 18:59:03.055814    8948 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0229 18:59:03.055869    8948 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:59:03.064542    8948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:59:03.102049    8948 docker.go:483] Stopping containers: [8e97c8b0663b 64c32c61bf5d 6a91c8ddc573 1c9917c5d0ae b26b87d6c44d f15b56fd3e40 2cd3e320c490 5fea30a66922 3bfc1c841801 fdd041ca73c4 a3cfc42914ef 057034593acf af1f1a455060 4dd773095a35 b965568de4c1 7ddaffa0aa6a 73e2981a9bb3 9229b9dd14c3 31b80572c164 a4649a470559]
	I0229 18:59:03.112644    8948 ssh_runner.go:195] Run: docker stop 8e97c8b0663b 64c32c61bf5d 6a91c8ddc573 1c9917c5d0ae b26b87d6c44d f15b56fd3e40 2cd3e320c490 5fea30a66922 3bfc1c841801 fdd041ca73c4 a3cfc42914ef 057034593acf af1f1a455060 4dd773095a35 b965568de4c1 7ddaffa0aa6a 73e2981a9bb3 9229b9dd14c3 31b80572c164 a4649a470559
	I0229 18:59:04.457817    8948 ssh_runner.go:235] Completed: docker stop 8e97c8b0663b 64c32c61bf5d 6a91c8ddc573 1c9917c5d0ae b26b87d6c44d f15b56fd3e40 2cd3e320c490 5fea30a66922 3bfc1c841801 fdd041ca73c4 a3cfc42914ef 057034593acf af1f1a455060 4dd773095a35 b965568de4c1 7ddaffa0aa6a 73e2981a9bb3 9229b9dd14c3 31b80572c164 a4649a470559: (1.3450461s)
	I0229 18:59:04.478219    8948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
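restartCluster stops every kubelet-managed container before reconfiguring: the name filter k8s_.*_(kube-system)_ matches the naming scheme cri-dockerd uses for pod containers. A sketch of the list-then-stop step:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or not) whose names match the
	// kube-system pod-container pattern.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_",
		"--format", "{{.ID}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		log.Println("no kube-system containers to stop")
		return
	}
	// Stop them all in one docker stop invocation, as the log does.
	args := append([]string{"stop"}, ids...)
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		log.Fatalf("docker stop failed: %v\n%s", err, out)
	}
	log.Printf("stopped %d containers", len(ids))
}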
	I0229 18:59:02.370398   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:04.371653   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:03.472848   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:05.482777   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:07.976085   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:04.734478    8948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:59:04.766841    8948 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Feb 29 18:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Feb 29 18:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5819 Feb 29 18:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Feb 29 18:53 /etc/kubernetes/scheduler.conf
	
	I0229 18:59:04.784583    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0229 18:59:04.852888    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0229 18:59:04.946929    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0229 18:59:05.042514    8948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0229 18:59:05.224929    8948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:05.313430    8948 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:59:05.313485    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:05.456455    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:07.223538    8948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.7670065s)
	I0229 18:59:07.223538    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:07.463440    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:07.556087    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
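Rather than a full `kubeadm init`, the restart path above replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch that replays the same sequence, assuming local bash execution instead of the SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// replayInitPhases re-runs the individual `kubeadm init` phases from the
// log, in order, against the regenerated config. Local bash execution
// (instead of minikube's SSH runner) is an assumption of this sketch.
func replayInitPhases() error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		// Same command shape as the log's ssh_runner invocations.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" `+
			`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := replayInitPhases(); err != nil {
		fmt.Println(err)
	}
}
```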
	I0229 18:59:07.651347    8948 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:59:07.664796    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.176354    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:08.665947    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.167125    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:09.218212    8948 api_server.go:72] duration metric: took 1.5668525s to wait for apiserver process to appear ...
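The repeated `pgrep -xnf` calls above are a simple poll on roughly a 500 ms cadence until a kube-apiserver process matching minikube's pattern exists. A sketch of the same wait loop; the two-minute deadline is an assumed value, not minikube's:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep on a ~500ms cadence until a
// kube-apiserver process matching minikube's pattern appears, as the
// log does. The two-minute deadline is an assumption of this sketch.
func waitForAPIServerProcess() (time.Duration, error) {
	start := time.Now()
	for time.Since(start) < 2*time.Minute {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return time.Since(start), nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return time.Since(start), fmt.Errorf("kube-apiserver process never appeared")
}

func main() {
	d, err := waitForAPIServerProcess()
	fmt.Println(d, err)
}
```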
	I0229 18:59:09.218212    8948 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:59:09.218475    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:09.224260    8948 api_server.go:269] stopped: https://127.0.0.1:59922/healthz: Get "https://127.0.0.1:59922/healthz": EOF
	I0229 18:59:06.387166   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:08.880711   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:10.006941   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:12.482795   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:09.731120    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:13.109710    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:59:13.109898    8948 api_server.go:103] status: https://127.0.0.1:59922/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:59:13.109965    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:13.213372    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:13.213462    8948 api_server.go:103] status: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:13.219652    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:13.310375    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:13.310375    8948 api_server.go:103] status: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:13.722798    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:13.738038    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:13.738038    8948 api_server.go:103] status: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:14.219962    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:14.236198    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:59:14.236198    8948 api_server.go:103] status: https://127.0.0.1:59922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:59:14.733437    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:14.745942    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 200:
	ok
	I0229 18:59:14.762864    8948 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:59:14.762864    8948 api_server.go:131] duration metric: took 5.5446068s to wait for apiserver health ...
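The 403 → 500 → 200 progression above is the normal restart sequence: the anonymous 403 appears before the RBAC bootstrap roles exist, the 500s enumerate post-start hooks that have not yet completed, and a plain `ok` arrives once every hook passes. The wait reduces to polling /healthz over TLS, skipping certificate verification because the probe targets the port-forwarded 127.0.0.1 address. A sketch, with an assumed overall timeout:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// 200 with body "ok". TLS verification is skipped because the check
// targets a port-forwarded 127.0.0.1 address, as in the log; the overall
// timeout is an assumption of this sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // all post-start hooks have completed
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://127.0.0.1:59922/healthz", time.Minute))
}
```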
	I0229 18:59:14.762952    8948 cni.go:84] Creating CNI manager for ""
	I0229 18:59:14.762952    8948 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:59:14.765622    8948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:59:14.776570    8948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:59:14.800942    8948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
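The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration. The sketch below writes a representative conflist of that shape; the subnet and plugin list are illustrative assumptions, not the exact file from the log:

```go
package main

import "os"

// A representative bridge CNI conflist of the kind minikube installs at
// /etc/cni/net.d/1-k8s.conflist. The subnet and plugin list here are
// illustrative assumptions, not the exact 457 bytes from the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Writing under /etc/cni requires root, matching the `sudo mkdir -p` in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```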
	I0229 18:59:14.833404    8948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:59:14.845390    8948 system_pods.go:59] 5 kube-system pods found
	I0229 18:59:14.845483    8948 system_pods.go:61] "etcd-kubernetes-upgrade-996700" [3167b379-deeb-49dd-907d-b20a357ec865] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:59:14.845483    8948 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-996700" [03015e1d-6d15-404c-a896-73d0e037769c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:59:14.845483    8948 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-996700" [eb22f7ed-37fd-484f-b7fe-a88d9a556bba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:14.845537    8948 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-996700" [56160ee8-b7ef-4faf-9af9-afe2a971c666] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:14.845537    8948 system_pods.go:61] "storage-provisioner" [19120b80-6d95-42e2-8f3f-bdd377e524f7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0229 18:59:14.845537    8948 system_pods.go:74] duration metric: took 12.1043ms to wait for pod list to return data ...
	I0229 18:59:14.845578    8948 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:59:14.853606    8948 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 18:59:14.853606    8948 node_conditions.go:123] node cpu capacity is 16
	I0229 18:59:14.853606    8948 node_conditions.go:105] duration metric: took 8.0273ms to run NodePressure ...
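The NodePressure verification above lists the nodes, records CPU and ephemeral-storage capacity, and expects the Memory/Disk/PID pressure conditions to be False. The same check expressed with client-go, using a placeholder kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists nodes, reports capacity, and flags any pressure condition that is
// not False, mirroring the node_conditions checks in the log. The
// kubeconfig path is a placeholder for this sketch.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("  %s is %s\n", c.Type, c.Status)
				}
			}
		}
	}
}
```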
	I0229 18:59:14.853732    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:59:15.228381    8948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:59:15.248974    8948 ops.go:34] apiserver oom_adj: -16
	I0229 18:59:15.248974    8948 kubeadm.go:640] restartCluster took 16.4013034s
	I0229 18:59:15.248974    8948 kubeadm.go:406] StartCluster complete in 16.5426814s
	I0229 18:59:15.248974    8948 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:59:15.248974    8948 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 18:59:15.250997    8948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:59:15.252832    8948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:59:15.252960    8948 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:59:15.253169    8948 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-996700"
	I0229 18:59:15.253257    8948 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-996700"
	I0229 18:59:15.253257    8948 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-996700"
	I0229 18:59:15.253257    8948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-996700"
	W0229 18:59:15.253257    8948 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:59:15.253396    8948 host.go:66] Checking if "kubernetes-upgrade-996700" exists ...
	I0229 18:59:15.253722    8948 config.go:182] Loaded profile config "kubernetes-upgrade-996700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:59:15.275986    8948 kapi.go:59] client config for kubernetes-upgrade-996700: &rest.Config{Host:"https://127.0.0.1:59922", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-996700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-996700\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1dd0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:59:15.286025    8948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:59:15.286882    8948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:59:15.289147    8948 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-996700" context rescaled to 1 replicas
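Rescaling the `coredns` deployment to one replica, as logged above, goes through the deployment's scale subresource. A client-go sketch with a placeholder kubeconfig path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rescaleCoreDNS sets the coredns deployment in kube-system to one
// replica via the scale subresource, as the kapi.go line above reports.
// The kubeconfig path is a placeholder for this sketch.
func rescaleCoreDNS(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.Background()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}

func main() {
	fmt.Println(rescaleCoreDNS("/path/to/kubeconfig"))
}
```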
	I0229 18:59:15.289213    8948 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:59:15.292279    8948 out.go:177] * Verifying Kubernetes components...
	I0229 18:59:15.322418    8948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:59:15.509671    8948 kapi.go:59] client config for kubernetes-upgrade-996700: &rest.Config{Host:"https://127.0.0.1:59922", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-996700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-996700\\client.key", CAFile:"C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1dd0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:59:15.511342    8948 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-996700"
	W0229 18:59:15.511342    8948 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:59:15.511342    8948 host.go:66] Checking if "kubernetes-upgrade-996700" exists ...
	I0229 18:59:15.525028    8948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:59:11.381763   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:13.878185   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.881278   15352 pod_ready.go:102] pod "metrics-server-57f55c9bc5-224sh" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:15.527943    8948 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:59:15.527943    8948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:59:15.543380    8948 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-996700 --format={{.State.Status}}
	I0229 18:59:15.543698    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:59:15.636449    8948 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:59:15.652708    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:59:15.747553    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:59:15.764271    8948 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:59:15.764271    8948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:59:15.781130    8948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-996700
	I0229 18:59:15.886191    8948 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:59:15.906929    8948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:59:15.949421    8948 api_server.go:72] duration metric: took 660.1509ms to wait for apiserver process to appear ...
	I0229 18:59:15.949491    8948 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:59:15.949491    8948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59922/healthz ...
	I0229 18:59:15.981261    8948 api_server.go:279] https://127.0.0.1:59922/healthz returned 200:
	ok
	I0229 18:59:15.981261    8948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:59:15.987402    8948 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:59:15.987402    8948 api_server.go:131] duration metric: took 37.9112ms to wait for apiserver health ...
	I0229 18:59:15.987402    8948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:59:16.010958    8948 system_pods.go:59] 5 kube-system pods found
	I0229 18:59:16.011069    8948 system_pods.go:61] "etcd-kubernetes-upgrade-996700" [3167b379-deeb-49dd-907d-b20a357ec865] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:59:16.011069    8948 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-996700" [03015e1d-6d15-404c-a896-73d0e037769c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:59:16.011194    8948 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-996700" [eb22f7ed-37fd-484f-b7fe-a88d9a556bba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:59:16.011194    8948 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-996700" [56160ee8-b7ef-4faf-9af9-afe2a971c666] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:59:16.011194    8948 system_pods.go:61] "storage-provisioner" [19120b80-6d95-42e2-8f3f-bdd377e524f7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0229 18:59:16.011257    8948 system_pods.go:74] duration metric: took 23.8545ms to wait for pod list to return data ...
	I0229 18:59:16.011257    8948 kubeadm.go:581] duration metric: took 722.0385ms to wait for : map[apiserver:true system_pods:true] ...
	I0229 18:59:16.011321    8948 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:59:16.021399    8948 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 18:59:16.021399    8948 node_conditions.go:123] node cpu capacity is 16
	I0229 18:59:16.021399    8948 node_conditions.go:105] duration metric: took 10.0776ms to run NodePressure ...
	I0229 18:59:16.021399    8948 start.go:228] waiting for startup goroutines ...
	I0229 18:59:16.025948    8948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59923 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubernetes-upgrade-996700\id_rsa Username:docker}
	I0229 18:59:16.245024    8948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:59:16.978578    8948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:59:16.981638    8948 addons.go:505] enable addons completed in 1.7286642s: enabled=[storage-provisioner default-storageclass]
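Enabling each addon comes down to copying its manifest into /etc/kubernetes/addons/ and applying it with the pinned kubectl against the in-VM kubeconfig, as the two `kubectl apply -f` runs above show. A sketch of the apply step, assuming local execution without the `sudo` wrapper:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon applies one addon manifest with the pinned kubectl binary
// and the in-VM kubeconfig, matching the `kubectl apply -f` commands in
// the log. Local (non-SSH, non-sudo) execution is an assumption.
func applyAddon(manifest string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}
```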
	I0229 18:59:16.981638    8948 start.go:233] waiting for cluster config update ...
	I0229 18:59:16.981638    8948 start.go:242] writing updated cluster config ...
	I0229 18:59:17.002450    8948 ssh_runner.go:195] Run: rm -f paused
	I0229 18:59:17.167202    8948 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 18:59:17.171003    8948 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-996700" cluster and "default" namespace by default
	I0229 18:59:14.980342   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:59:16.980973   10824 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qqqh2" in "kube-system" namespace has status "Ready":"False"
	
	
	==> Docker <==
	Feb 29 18:58:54 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:54Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 29 18:58:54 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 29 18:58:54 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:54Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 29 18:58:54 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:54Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 29 18:58:54 kubernetes-upgrade-996700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 29 18:58:56 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fea30a66922b3c88c999d9ded8deba11c919f8f243cc82f2a71e9df1836552a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:58:56 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b26b87d6c44de3fa02ea7c93d858aad55e7299f304df51ffd514ec3bb51b3dd4/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:58:56 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f15b56fd3e4098c7ca1bc8b9a29b29e4a30c77109b23a07cc3f284f7db5feee6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:58:56 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:58:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2cd3e320c4906d216d1396d92ef255d35a4e7da3c12054632efbc68e94389705/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.402408538Z" level=info msg="ignoring event" container=5fea30a66922b3c88c999d9ded8deba11c919f8f243cc82f2a71e9df1836552a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.413420619Z" level=info msg="ignoring event" container=2cd3e320c4906d216d1396d92ef255d35a4e7da3c12054632efbc68e94389705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.420442861Z" level=info msg="ignoring event" container=f15b56fd3e4098c7ca1bc8b9a29b29e4a30c77109b23a07cc3f284f7db5feee6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.422878146Z" level=info msg="ignoring event" container=6a91c8ddc573fb6fdfe8e10d7f93dddf7783e9d758daa8bd7f9cb1d647b5c122 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.423048251Z" level=info msg="ignoring event" container=b26b87d6c44de3fa02ea7c93d858aad55e7299f304df51ffd514ec3bb51b3dd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.523565525Z" level=info msg="ignoring event" container=64c32c61bf5dc5daa1c4d1602b58bc59a014b35fcea74194fdd099ce9543faef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:03 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:03.603451286Z" level=info msg="ignoring event" container=1c9917c5d0aec440c1548afba03c35cbe302366fb5a821e8a5529901b22b15d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:04 kubernetes-upgrade-996700 dockerd[3499]: time="2024-02-29T18:59:04.350701713Z" level=info msg="ignoring event" container=8e97c8b0663b02d39520251834061c81d0ce6d0cd3fca5d1471e1ef2368ff403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fe448f7197c44a3300453c156312cc7ee427b6923ad7a6d1fd7060ba37ccec90/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: W0229 18:59:05.305395    3759 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2020961fd090d8c4b380ed425727d9aeacfe4922d9e03ead07b537498de8b3da/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: W0229 18:59:05.310957    3759 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/074c7a37ba3f15e404f32b3554243d8a116225e06518c96fd401107d3b2c8fe8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: W0229 18:59:05.403291    3759 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: time="2024-02-29T18:59:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/43d884e50f31b2e9eb42d3582662bb6ded027476dc43b8e380562967a107d6ea/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Feb 29 18:59:05 kubernetes-upgrade-996700 cri-dockerd[3759]: W0229 18:59:05.430245    3759 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b469ba5b8e886       4270645ed6b7a       12 seconds ago      Running             kube-scheduler            2                   074c7a37ba3f1       kube-scheduler-kubernetes-upgrade-996700
	bc2da0a4f6648       d4e01cdf63970       12 seconds ago      Running             kube-controller-manager   2                   fe448f7197c44       kube-controller-manager-kubernetes-upgrade-996700
	ce9e5bc99ff98       bbb47a0f83324       12 seconds ago      Running             kube-apiserver            2                   43d884e50f31b       kube-apiserver-kubernetes-upgrade-996700
	87499e4d79492       a0eed15eed449       12 seconds ago      Running             etcd                      2                   2020961fd090d       etcd-kubernetes-upgrade-996700
	8e97c8b0663b0       bbb47a0f83324       24 seconds ago      Exited              kube-apiserver            1                   f15b56fd3e409       kube-apiserver-kubernetes-upgrade-996700
	64c32c61bf5dc       d4e01cdf63970       24 seconds ago      Exited              kube-controller-manager   1                   2cd3e320c4906       kube-controller-manager-kubernetes-upgrade-996700
	6a91c8ddc573f       4270645ed6b7a       24 seconds ago      Exited              kube-scheduler            1                   5fea30a66922b       kube-scheduler-kubernetes-upgrade-996700
	1c9917c5d0aec       a0eed15eed449       24 seconds ago      Exited              etcd                      1                   b26b87d6c44de       etcd-kubernetes-upgrade-996700
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-996700
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-996700
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:58:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-996700
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:59:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:59:13 +0000   Thu, 29 Feb 2024 18:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:59:13 +0000   Thu, 29 Feb 2024 18:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:59:13 +0000   Thu, 29 Feb 2024 18:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:59:13 +0000   Thu, 29 Feb 2024 18:58:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    kubernetes-upgrade-996700
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868668Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868668Ki
	  pods:               110
	System Info:
	  Machine ID:                 eef5e1afe77d437d9d08010f67f5ccdb
	  System UUID:                eef5e1afe77d437d9d08010f67f5ccdb
	  Boot ID:                    d6e19e81-4b60-457d-ba23-5f12408b314c
	  Kernel Version:             5.15.133.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://25.0.3
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-996700                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         50s
	  kube-system                 kube-apiserver-kubernetes-upgrade-996700             250m (1%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-996700    200m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-kubernetes-upgrade-996700             100m (0%)     0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (4%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 57s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet  Node kubernetes-upgrade-996700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet  Node kubernetes-upgrade-996700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x7 over 57s)  kubelet  Node kubernetes-upgrade-996700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  57s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 13s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 13s)  kubelet  Node kubernetes-upgrade-996700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet  Node kubernetes-upgrade-996700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 13s)  kubelet  Node kubernetes-upgrade-996700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[Feb29 18:34] hrtimer: interrupt took 3342226 ns
	
	
	==> etcd [1c9917c5d0ae] <==
	{"level":"info","ts":"2024-02-29T18:58:57.516381Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:58:58.406779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:58:58.406983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:58:58.40718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-02-29T18:58:58.40721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:58:58.407286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-02-29T18:58:58.407464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:58:58.407545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-02-29T18:58:58.411596Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-996700 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:58:58.411733Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:58:58.41185Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:58:58.412325Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:58:58.412371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:58:58.418864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-02-29T18:58:58.420608Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:59:03.301689Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T18:59:03.30183Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-996700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2024-02-29T18:59:03.301987Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:59:03.302141Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:59:03.31318Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:59:03.313286Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T18:59:03.410216Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2024-02-29T18:59:03.4222Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-02-29T18:59:03.422682Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-02-29T18:59:03.422777Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-996700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [87499e4d7949] <==
	{"level":"info","ts":"2024-02-29T18:59:09.242997Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:59:09.243006Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:59:09.302355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2024-02-29T18:59:09.303266Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2024-02-29T18:59:09.304176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:59:09.304305Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:59:09.307562Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:59:09.307922Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-02-29T18:59:09.307968Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-02-29T18:59:09.307926Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:59:09.307996Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:59:11.006712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-29T18:59:11.006866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:59:11.006925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-02-29T18:59:11.006941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
	{"level":"info","ts":"2024-02-29T18:59:11.006948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2024-02-29T18:59:11.006957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
	{"level":"info","ts":"2024-02-29T18:59:11.006965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
	{"level":"info","ts":"2024-02-29T18:59:11.011899Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:kubernetes-upgrade-996700 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:59:11.011978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:59:11.01209Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:59:11.012693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:59:11.013061Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:59:11.015274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:59:11.0153Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 18:59:20 up  3:04,  0 users,  load average: 3.02, 4.13, 4.37
	Linux kubernetes-upgrade-996700 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [8e97c8b0663b] <==
	W0229 18:59:04.306529       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306574       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306594       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306694       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306756       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306792       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306851       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306965       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.306995       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.307036       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.307049       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.307577       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.307776       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.307982       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308012       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308257       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308362       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308331       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308458       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308476       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308764       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308901       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.308993       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.309039       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:59:04.309352       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ce9e5bc99ff9] <==
	I0229 18:59:13.041838       1 controller.go:116] Starting legacy_token_tracking_controller
	I0229 18:59:13.041877       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0229 18:59:13.041903       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0229 18:59:13.041932       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0229 18:59:13.211095       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:59:13.216409       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:59:13.301269       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:59:13.301367       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:59:13.301411       1 aggregator.go:165] initial CRD sync complete...
	I0229 18:59:13.301384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:59:13.301426       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:59:13.301433       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:59:13.301440       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:59:13.301608       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 18:59:13.301905       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 18:59:13.302021       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 18:59:13.301926       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:59:14.044780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 18:59:14.535430       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0229 18:59:14.537138       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:59:15.009243       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 18:59:15.046971       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 18:59:15.142589       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 18:59:15.187925       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 18:59:15.199887       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [64c32c61bf5d] <==
	I0229 18:58:59.446252       1 serving.go:380] Generated self-signed cert in-memory
	I0229 18:59:00.309651       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0229 18:59:00.309776       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:59:00.312284       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:59:00.312419       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:59:00.312837       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0229 18:59:00.314031       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [bc2da0a4f664] <==
	E0229 18:59:17.068730       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0229 18:59:17.068896       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0229 18:59:17.226937       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0229 18:59:17.227121       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0229 18:59:17.227137       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0229 18:59:17.368745       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0229 18:59:17.368928       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0229 18:59:17.369263       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0229 18:59:17.520850       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0229 18:59:17.521196       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0229 18:59:17.521215       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0229 18:59:17.672318       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0229 18:59:17.672497       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0229 18:59:17.672525       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0229 18:59:17.816671       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0229 18:59:17.816815       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0229 18:59:17.816834       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0229 18:59:17.965808       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0229 18:59:17.965997       1 stateful_set.go:161] "Starting stateful set controller"
	I0229 18:59:17.966017       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0229 18:59:18.120679       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0229 18:59:18.120814       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0229 18:59:18.120829       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0229 18:59:18.266063       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0229 18:59:18.266181       1 cleaner.go:83] "Starting CSR cleaner controller"
	
	
	==> kube-scheduler [6a91c8ddc573] <==
	I0229 18:58:59.513128       1 serving.go:380] Generated self-signed cert in-memory
	W0229 18:59:01.702107       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:59:01.702174       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:59:01.702190       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:59:01.702203       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:59:01.803476       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 18:59:01.803534       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:59:01.806232       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:59:01.806410       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:59:01.808545       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:59:01.808547       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:59:01.907884       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:59:03.302442       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 18:59:03.302628       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 18:59:03.302781       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 18:59:03.303292       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b469ba5b8e88] <==
	I0229 18:59:10.143304       1 serving.go:380] Generated self-signed cert in-memory
	W0229 18:59:13.106931       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:59:13.106999       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:59:13.107018       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:59:13.107030       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:59:13.317446       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 18:59:13.317540       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:59:13.320058       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:59:13.320268       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:59:13.320666       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:59:13.321340       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:59:13.421825       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:08.333077    4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-996700?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="800ms"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:08.334725    4995 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9f965f1bc8251599dfaa8e8c502ec08-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-996700\" (UID: \"a9f965f1bc8251599dfaa8e8c502ec08\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-996700"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:08.503716    4995 scope.go:117] "RemoveContainer" containerID="1c9917c5d0aec440c1548afba03c35cbe302366fb5a821e8a5529901b22b15d5"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:08.528224    4995 scope.go:117] "RemoveContainer" containerID="8e97c8b0663b02d39520251834061c81d0ce6d0cd3fca5d1471e1ef2368ff403"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:08.539694    4995 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-996700"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:08.541092    4995 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="kubernetes-upgrade-996700"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:08.549395    4995 scope.go:117] "RemoveContainer" containerID="64c32c61bf5dc5daa1c4d1602b58bc59a014b35fcea74194fdd099ce9543faef"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:08.561507    4995 scope.go:117] "RemoveContainer" containerID="6a91c8ddc573fb6fdfe8e10d7f93dddf7783e9d758daa8bd7f9cb1d647b5c122"
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: W0229 18:59:08.672188    4995 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:08.672422    4995 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: W0229 18:59:08.715978    4995 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-996700&limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:08 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:08.716152    4995 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-996700&limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: W0229 18:59:09.102426    4995 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:09.102584    4995 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:09.201546    4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-996700?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="1.6s"
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: W0229 18:59:09.201853    4995 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:09.202031    4995 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.85.2:8443: connect: connection refused
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:09.414460    4995 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-996700"
	Feb 29 18:59:09 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:09.414983    4995 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="kubernetes-upgrade-996700"
	Feb 29 18:59:11 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:11.102603    4995 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-996700"
	Feb 29 18:59:13 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:13.410769    4995 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-996700"
	Feb 29 18:59:13 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:13.411112    4995 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-996700"
	Feb 29 18:59:13 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:13.703109    4995 apiserver.go:52] "Watching apiserver"
	Feb 29 18:59:13 kubernetes-upgrade-996700 kubelet[4995]: I0229 18:59:13.727852    4995 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 29 18:59:13 kubernetes-upgrade-996700 kubelet[4995]: E0229 18:59:13.781321    4995 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-996700\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-996700"
	

-- /stdout --
** stderr ** 
	W0229 18:59:19.034045   10528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
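
Note on the recurring stderr warning: every minikube invocation in this run logs "Unable to resolve the current Docker CLI context \"default\"" because the Docker CLI looks for context metadata under .docker\contexts\meta\<hash>\meta.json (the directory name appears to be derived from a hash of the context name) and that file is absent on this Jenkins host. A minimal sketch of how the context state could be inspected and reset from a Windows shell, assuming a standard Docker Desktop install (the appropriate recovery step depends on how the context store got into this state):

  docker context ls                 # list known contexts; "default" is the built-in one
  docker context inspect default    # performs the same metadata lookup that fails above
  docker context use default        # re-select the built-in context in ~/.docker/config.json
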
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-996700 -n kubernetes-upgrade-996700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-996700 -n kubernetes-upgrade-996700: (1.4561143s)
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-996700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-996700 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-996700 describe pod storage-provisioner: exit status 1 (184.2745ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-996700 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-996700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-996700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-996700: (6.1893957s)
--- FAIL: TestKubernetesUpgrade (717.61s)
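
To reproduce a failure like this outside CI, minikube's integration tests can be invoked directly through go test; a sketch, assuming a checkout of the minikube repository with a built binary under out/ and the flag name documented in minikube's contributor guide:

  go test ./test/integration -v -timeout 90m -run TestKubernetesUpgrade --minikube-start-args="--driver=docker"
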

TestStartStop/group/old-k8s-version/serial/FirstStart (563.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0229 18:51:58.515985    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 109 (9m20.5338858s)

-- stdout --
	* [old-k8s-version-718400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-718400 in cluster old-k8s-version-718400
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 19:00:25 old-k8s-version-718400 kubelet[5946]: E0229 19:00:25.884918    5946 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 29 19:00:28 old-k8s-version-718400 kubelet[5946]: E0229 19:00:28.885077    5946 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	  Feb 29 19:00:28 old-k8s-version-718400 kubelet[5946]: E0229 19:00:28.886524    5946 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	
	

-- /stdout --
** stderr ** 
	W0229 18:51:28.403867    7604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:51:28.502234    7604 out.go:291] Setting OutFile to fd 1148 ...
	I0229 18:51:28.503295    7604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:51:28.503295    7604 out.go:304] Setting ErrFile to fd 1320...
	I0229 18:51:28.503295    7604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:51:28.543361    7604 out.go:298] Setting JSON to false
	I0229 18:51:28.549367    7604 start.go:129] hostinfo: {"hostname":"minikube7","uptime":10648,"bootTime":1709222039,"procs":200,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 18:51:28.549367    7604 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:51:28.553369    7604 out.go:177] * [old-k8s-version-718400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:51:28.558380    7604 notify.go:220] Checking for updates...
	I0229 18:51:28.563381    7604 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 18:51:28.568361    7604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:51:28.572419    7604 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 18:51:28.579380    7604 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:51:28.588350    7604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:51:28.593349    7604 config.go:182] Loaded profile config "cert-expiration-080100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:51:28.593349    7604 config.go:182] Loaded profile config "cert-options-476400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:51:28.594364    7604 config.go:182] Loaded profile config "kubernetes-upgrade-996700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:51:28.594364    7604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:51:28.936047    7604 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 18:51:28.950054    7604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:51:29.363054    7604 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 18:51:29.319206859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:51:29.367063    7604 out.go:177] * Using the docker driver based on user configuration
	I0229 18:51:29.372050    7604 start.go:299] selected driver: docker
	I0229 18:51:29.372050    7604 start.go:903] validating driver "docker" against <nil>
	I0229 18:51:29.372050    7604 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:51:29.439328    7604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:51:29.825232    7604 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 18:51:29.785754053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:51:29.825845    7604 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:51:29.827115    7604 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:51:29.832240    7604 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 18:51:29.835414    7604 cni.go:84] Creating CNI manager for ""
	I0229 18:51:29.835414    7604 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:51:29.835414    7604 start_flags.go:323] config:
	{Name:old-k8s-version-718400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:51:29.839930    7604 out.go:177] * Starting control plane node old-k8s-version-718400 in cluster old-k8s-version-718400
	I0229 18:51:29.846096    7604 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 18:51:29.852106    7604 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 18:51:29.856099    7604 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:51:29.856099    7604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 18:51:29.856989    7604 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 18:51:29.856989    7604 cache.go:56] Caching tarball of preloaded images
	I0229 18:51:29.856989    7604 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:51:29.857611    7604 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 18:51:29.857611    7604 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\config.json ...
	I0229 18:51:29.857611    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\config.json: {Name:mk1a45cfd5d674a963468abafc75e6698cf3965e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:51:30.042117    7604 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 18:51:30.042189    7604 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 18:51:30.042189    7604 cache.go:194] Successfully downloaded all kic artifacts
	I0229 18:51:30.042189    7604 start.go:365] acquiring machines lock for old-k8s-version-718400: {Name:mkb837b5f41f1d87e763b06f510aab2257e4f19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:51:30.042189    7604 start.go:369] acquired machines lock for "old-k8s-version-718400" in 0s
	I0229 18:51:30.042189    7604 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-718400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:51:30.042816    7604 start.go:125] createHost starting for "" (driver="docker")
	I0229 18:51:30.048710    7604 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 18:51:30.049365    7604 start.go:159] libmachine.API.Create for "old-k8s-version-718400" (driver="docker")
	I0229 18:51:30.049555    7604 client.go:168] LocalClient.Create starting
	I0229 18:51:30.050008    7604 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0229 18:51:30.050554    7604 main.go:141] libmachine: Decoding PEM data...
	I0229 18:51:30.050554    7604 main.go:141] libmachine: Parsing certificate...
	I0229 18:51:30.050695    7604 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0229 18:51:30.050695    7604 main.go:141] libmachine: Decoding PEM data...
	I0229 18:51:30.050695    7604 main.go:141] libmachine: Parsing certificate...
	I0229 18:51:30.068455    7604 cli_runner.go:164] Run: docker network inspect old-k8s-version-718400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 18:51:30.280528    7604 cli_runner.go:211] docker network inspect old-k8s-version-718400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 18:51:30.294550    7604 network_create.go:281] running [docker network inspect old-k8s-version-718400] to gather additional debugging logs...
	I0229 18:51:30.294550    7604 cli_runner.go:164] Run: docker network inspect old-k8s-version-718400
	W0229 18:51:30.496934    7604 cli_runner.go:211] docker network inspect old-k8s-version-718400 returned with exit code 1
	I0229 18:51:30.496934    7604 network_create.go:284] error running [docker network inspect old-k8s-version-718400]: docker network inspect old-k8s-version-718400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-718400 not found
	I0229 18:51:30.496934    7604 network_create.go:286] output of [docker network inspect old-k8s-version-718400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-718400 not found
	
	** /stderr **
	I0229 18:51:30.508930    7604 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 18:51:30.758820    7604 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:51:30.790210    7604 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:51:30.822268    7604 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:51:30.853345    7604 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:51:30.884292    7604 network.go:210] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:51:30.906109    7604 network.go:207] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022578c0}
	I0229 18:51:30.906109    7604 network_create.go:124] attempt to create docker network old-k8s-version-718400 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0229 18:51:30.915296    7604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-718400 old-k8s-version-718400
	W0229 18:51:31.111411    7604 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-718400 old-k8s-version-718400 returned with exit code 1
	W0229 18:51:31.111411    7604 network_create.go:149] failed to create docker network old-k8s-version-718400 192.168.94.0/24 with gateway 192.168.94.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-718400 old-k8s-version-718400: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 18:51:31.111411    7604 network_create.go:116] failed to create docker network old-k8s-version-718400 192.168.94.0/24, will retry: subnet is taken
	I0229 18:51:31.150701    7604 network.go:210] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 18:51:31.179191    7604 network.go:207] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00087fd10}
	I0229 18:51:31.179287    7604 network_create.go:124] attempt to create docker network old-k8s-version-718400 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0229 18:51:31.188221    7604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-718400 old-k8s-version-718400
	I0229 18:51:31.679318    7604 network_create.go:108] docker network old-k8s-version-718400 192.168.103.0/24 created
	I0229 18:51:31.679318    7604 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-718400" container
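
The retry sequence above shows the shape of minikube's subnet search: candidate private /24 subnets step the third octet by 9 (49, 58, 67, 76, 85, 94, 103, ...), subnets already known to be reserved are skipped, and a "Pool overlaps" error from the daemon simply advances the scan to the next candidate. A minimal Go sketch of that scan-and-retry loop, assuming only that the docker CLI is on PATH (createFreeNetwork is a hypothetical helper, not minikube's actual function):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // createFreeNetwork walks candidate 192.168.x.0/24 subnets, stepping the
    // third octet by 9 as in the log above, until docker accepts one.
    func createFreeNetwork(name string) (string, error) {
    	for octet := 49; octet < 256; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		cmd := exec.Command("docker", "network", "create", "--driver=bridge",
    			"--subnet="+subnet, "--gateway="+gateway, name)
    		var stderr bytes.Buffer
    		cmd.Stderr = &stderr
    		if err := cmd.Run(); err != nil {
    			if strings.Contains(stderr.String(), "Pool overlaps") {
    				continue // subnet taken by some other network: try the next one
    			}
    			return "", fmt.Errorf("docker network create: %v: %s", err, stderr.String())
    		}
    		return subnet, nil
    	}
    	return "", fmt.Errorf("no free 192.168.x.0/24 subnet for %s", name)
    }

    func main() {
    	subnet, err := createFreeNetwork("example-net")
    	fmt.Println(subnet, err)
    }

Note that in the log the first attempt at 192.168.94.0/24 still hit "Pool overlaps" even though the reservation scan considered it free: the daemon, not the scan, is the final arbiter, which is why the create itself is wrapped in a retry.
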
	I0229 18:51:31.700319    7604 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 18:51:31.927498    7604 cli_runner.go:164] Run: docker volume create old-k8s-version-718400 --label name.minikube.sigs.k8s.io=old-k8s-version-718400 --label created_by.minikube.sigs.k8s.io=true
	I0229 18:51:32.132198    7604 oci.go:103] Successfully created a docker volume old-k8s-version-718400
	I0229 18:51:32.146199    7604 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-718400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-718400 --entrypoint /usr/bin/test -v old-k8s-version-718400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 18:51:34.022666    7604 cli_runner.go:217] Completed: docker run --rm --name old-k8s-version-718400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-718400 --entrypoint /usr/bin/test -v old-k8s-version-718400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (1.8763541s)
	I0229 18:51:34.022763    7604 oci.go:107] Successfully prepared a docker volume old-k8s-version-718400
	I0229 18:51:34.022880    7604 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:51:34.022880    7604 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 18:51:34.036913    7604 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-718400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 18:51:52.152271    7604 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-718400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (18.1152112s)
	I0229 18:51:52.152271    7604 kic.go:203] duration metric: took 18.129244 seconds to extract preloaded images to volume
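
The two docker run calls above are the "preload sidecar" pattern: a throwaway kicbase container first verifies the named volume is usable (/usr/bin/test -d /var/lib), then a second one bind-mounts the preloaded-images tarball read-only and untars it straight into the volume, so the node container later starts with its image store already populated. A sketch of the extraction step under the same assumptions (docker CLI present; the tarball path, volume, and image tag below are placeholders for the real ones in the log):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	tarball := `C:\path\to\preloaded-images.tar.lz4` // placeholder
    	volume := "example-volume"                        // placeholder
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244"

    	// Untar the lz4-compressed preload directly into the named volume.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
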
	I0229 18:51:52.162500    7604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 18:51:52.511842    7604 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 18:51:52.471960644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 18:51:52.521719    7604 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 18:51:52.878950    7604 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-718400 --name old-k8s-version-718400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-718400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-718400 --network old-k8s-version-718400 --ip 192.168.103.2 --volume old-k8s-version-718400:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 18:51:53.802959    7604 cli_runner.go:164] Run: docker container inspect old-k8s-version-718400 --format={{.State.Running}}
	I0229 18:51:53.993984    7604 cli_runner.go:164] Run: docker container inspect old-k8s-version-718400 --format={{.State.Status}}
	I0229 18:51:54.202094    7604 cli_runner.go:164] Run: docker exec old-k8s-version-718400 stat /var/lib/dpkg/alternatives/iptables
	I0229 18:51:54.470046    7604 oci.go:144] the created container "old-k8s-version-718400" has a running status.
	I0229 18:51:54.470143    7604 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa...
	I0229 18:51:54.658349    7604 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 18:51:54.885448    7604 cli_runner.go:164] Run: docker container inspect old-k8s-version-718400 --format={{.State.Status}}
	I0229 18:51:55.075446    7604 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 18:51:55.075446    7604 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-718400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 18:51:55.359165    7604 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa...
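
The key handling above boils down to: generate an RSA keypair on the host, write id_rsa/id_rsa.pub under the machine directory, copy the public half into /home/docker/.ssh/authorized_keys in the container, then chown it to the docker user. A minimal sketch of producing the authorized_keys line (assumes the golang.org/x/crypto/ssh package; a 2048-bit RSA key yields a line of roughly the 381 bytes transferred above):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate a 2048-bit RSA key and print its authorized_keys form,
    	// the piece that gets copied into the container.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
    }
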
	I0229 18:51:58.115707    7604 cli_runner.go:164] Run: docker container inspect old-k8s-version-718400 --format={{.State.Status}}
	I0229 18:51:58.301880    7604 machine.go:88] provisioning docker machine ...
	I0229 18:51:58.301880    7604 ubuntu.go:169] provisioning hostname "old-k8s-version-718400"
	I0229 18:51:58.311941    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:51:58.497461    7604 main.go:141] libmachine: Using SSH client type: native
	I0229 18:51:58.507209    7604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59625 <nil> <nil>}
	I0229 18:51:58.507209    7604 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-718400 && echo "old-k8s-version-718400" | sudo tee /etc/hostname
	I0229 18:51:58.706651    7604 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-718400
	
	I0229 18:51:58.718356    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:51:58.885314    7604 main.go:141] libmachine: Using SSH client type: native
	I0229 18:51:58.885314    7604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59625 <nil> <nil>}
	I0229 18:51:58.885314    7604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-718400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-718400/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-718400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:51:59.068529    7604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:51:59.068529    7604 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 18:51:59.068529    7604 ubuntu.go:177] setting up certificates
	I0229 18:51:59.068529    7604 provision.go:83] configureAuth start
	I0229 18:51:59.078788    7604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-718400
	I0229 18:51:59.268767    7604 provision.go:138] copyHostCerts
	I0229 18:51:59.268862    7604 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:51:59.268862    7604 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 18:51:59.269606    7604 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 18:51:59.271749    7604 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:51:59.271749    7604 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 18:51:59.272299    7604 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:51:59.273403    7604 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:51:59.273403    7604 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 18:51:59.274104    7604 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:51:59.274879    7604 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-718400 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-718400]
	I0229 18:51:59.891983    7604 provision.go:172] copyRemoteCerts
	I0229 18:51:59.903388    7604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:51:59.911218    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:00.090639    7604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59625 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 18:52:00.222811    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 18:52:00.268694    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:52:00.310733    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:52:00.350523    7604 provision.go:86] duration metric: configureAuth took 1.2819834s
	I0229 18:52:00.350523    7604 ubuntu.go:193] setting minikube options for container-runtime
	I0229 18:52:00.350523    7604 config.go:182] Loaded profile config "old-k8s-version-718400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:52:00.359502    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:00.539430    7604 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:00.540449    7604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59625 <nil> <nil>}
	I0229 18:52:00.540449    7604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:52:00.707544    7604 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 18:52:00.707544    7604 ubuntu.go:71] root file system type: overlay
	I0229 18:52:00.707544    7604 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:52:00.718503    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:00.880101    7604 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:00.880809    7604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59625 <nil> <nil>}
	I0229 18:52:00.880972    7604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:52:01.080133    7604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:52:01.090900    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:01.290725    7604 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:01.291518    7604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 59625 <nil> <nil>}
	I0229 18:52:01.291518    7604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:52:02.684775    7604 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 18:52:01.063492005 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0229 18:52:02.684775    7604 machine.go:91] provisioned docker machine in 4.382859s
	I0229 18:52:02.684775    7604 client.go:171] LocalClient.Create took 32.6349549s
	I0229 18:52:02.684775    7604 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-718400" took 32.6351451s
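
The `sudo diff -u ... || { mv ...; daemon-reload; enable; restart; }` exchange above is a change-detection idiom: diff exits nonzero only when the rendered unit differs from what is installed, and only then is the new file moved into place and docker restarted; an unchanged unit costs nothing. The same idea sketched in Go (writeIfChanged is a hypothetical helper):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // writeIfChanged installs newContent at path only when it differs from the
    // current file, and reports whether a reload/restart would be needed.
    func writeIfChanged(path string, newContent []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return false, nil // identical: no daemon-reload, no restart
    	}
    	if err := os.WriteFile(path, newContent, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err)
    }
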
	I0229 18:52:02.684775    7604 start.go:300] post-start starting for "old-k8s-version-718400" (driver="docker")
	I0229 18:52:02.684775    7604 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:52:02.697773    7604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:52:02.707774    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:02.882145    7604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59625 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 18:52:03.026493    7604 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:52:03.037546    7604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 18:52:03.037546    7604 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 18:52:03.037546    7604 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 18:52:03.037546    7604 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 18:52:03.037546    7604 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 18:52:03.037546    7604 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 18:52:03.038552    7604 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 18:52:03.049550    7604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:52:03.070825    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 18:52:03.116641    7604 start.go:303] post-start completed in 431.8632ms
	I0229 18:52:03.130632    7604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-718400
	I0229 18:52:03.308281    7604 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\config.json ...
	I0229 18:52:03.328044    7604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:52:03.339168    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:03.512206    7604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59625 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 18:52:03.639987    7604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 18:52:03.653877    7604 start.go:128] duration metric: createHost completed in 33.6107882s
	I0229 18:52:03.653877    7604 start.go:83] releasing machines lock for "old-k8s-version-718400", held for 33.6114158s
	I0229 18:52:03.663302    7604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-718400
	I0229 18:52:03.846464    7604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:52:03.855362    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:03.856360    7604 ssh_runner.go:195] Run: cat /version.json
	I0229 18:52:03.867383    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:04.051953    7604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59625 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 18:52:04.066113    7604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59625 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 18:52:04.189838    7604 ssh_runner.go:195] Run: systemctl --version
	I0229 18:52:04.355100    7604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:52:04.381914    7604 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0229 18:52:04.403104    7604 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
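
The failure above is a host/guest path-separator mismatch: the command string was assembled with Windows path semantics, so /etc/cni/net.d became \etc\cni\net.d before being run inside the Linux container, where find then reports "No such file or directory". In Go terms the difference is path.Join versus filepath.Join on a Windows host:

    package main

    import (
    	"fmt"
    	"path"
    	"path/filepath"
    )

    func main() {
    	// path.Join always uses forward slashes: right for remote Linux commands.
    	fmt.Println(path.Join("/etc", "cni", "net.d")) // /etc/cni/net.d
    	// filepath.Join uses the host OS separator: on Windows this prints
    	// \etc\cni\net.d, the exact string that failed above.
    	fmt.Println(filepath.Join("/etc", "cni", "net.d"))
    }
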
	I0229 18:52:04.413112    7604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:52:04.454825    7604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:52:04.485834    7604 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:52:04.485834    7604 start.go:475] detecting cgroup driver to use...
	I0229 18:52:04.485834    7604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:52:04.485834    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:52:04.529451    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:52:04.564423    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:52:04.584916    7604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:52:04.595733    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:52:04.623905    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:52:04.662282    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:52:04.690494    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:52:04.723646    7604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:52:04.752922    7604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:52:04.784025    7604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:52:04.813138    7604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:52:04.843315    7604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:52:05.011375    7604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:52:05.199256    7604 start.go:475] detecting cgroup driver to use...
	I0229 18:52:05.199303    7604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 18:52:05.213255    7604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:52:05.239771    7604 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 18:52:05.251772    7604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:52:05.273782    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:52:05.338070    7604 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:52:05.362575    7604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:52:05.383989    7604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:52:05.427831    7604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:52:05.592078    7604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:52:05.758521    7604 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:52:05.759072    7604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:52:05.808769    7604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:52:05.985629    7604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:52:06.614690    7604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:52:06.684782    7604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:52:06.747056    7604 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0229 18:52:06.757651    7604 cli_runner.go:164] Run: docker exec -t old-k8s-version-718400 dig +short host.docker.internal
	I0229 18:52:07.048219    7604 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 18:52:07.061146    7604 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 18:52:07.073995    7604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
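
The three steps above pin host.minikube.internal: dig host.docker.internal from inside the container yields the host's address (192.168.65.254 here), grep checks for an existing entry, and the bash pipeline rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending a fresh one. The same upsert sketched in Go (upsertHost is a hypothetical helper; unlike the real pipeline it also drops blank lines, purely for brevity):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any stale line ending in "\t"+name, then appends a
    // fresh "ip<TAB>name" entry.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal\n"
    	fmt.Print(upsertHost(hosts, "192.168.65.254", "host.minikube.internal"))
    }
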
	I0229 18:52:07.102000    7604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 18:52:07.295514    7604 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:52:07.306612    7604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:52:07.354610    7604 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:52:07.354610    7604 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:52:07.371576    7604 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:52:07.398589    7604 ssh_runner.go:195] Run: which lz4
	I0229 18:52:07.423603    7604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:52:07.439584    7604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:52:07.439584    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 18:52:22.008527    7604 docker.go:649] Took 14.600824 seconds to copy over tarball
	I0229 18:52:22.022485    7604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:52:26.513400    7604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.4907784s)
	I0229 18:52:26.513400    7604 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:52:26.606993    7604 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:52:26.629378    7604 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 18:52:26.678203    7604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:52:26.826110    7604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:52:35.893465    7604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (9.0672822s)
	I0229 18:52:35.905435    7604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:52:35.952794    7604 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:52:35.952794    7604 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:52:35.952794    7604 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:52:35.969212    7604 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:52:35.976111    7604 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:52:35.981127    7604 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:52:35.981127    7604 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:52:35.981127    7604 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:52:35.982113    7604 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:52:35.986110    7604 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:52:35.990125    7604 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:52:35.994121    7604 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:52:35.995112    7604 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:52:35.999155    7604 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:52:36.000118    7604 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:52:36.000118    7604 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:52:36.000118    7604 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:52:36.001113    7604 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:52:36.004107    7604 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	W0229 18:52:36.085202    7604 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:52:36.177593    7604 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:52:36.270814    7604 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:52:36.349537    7604 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:52:36.426223    7604 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:52:36.495227    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	W0229 18:52:36.507405    7604 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:52:36.522834    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:52:36.546364    7604 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:52:36.546364    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 18:52:36.546364    7604 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:52:36.557332    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:52:36.569319    7604 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:52:36.569319    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 18:52:36.569319    7604 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:52:36.581322    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:52:36.581322    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:52:36.584321    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:52:36.599340    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 18:52:36.612355    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0229 18:52:36.617713    7604 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:52:36.644779    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 18:52:36.645784    7604 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:52:36.645784    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 18:52:36.645784    7604 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:52:36.645784    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 18:52:36.645784    7604 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:52:36.646788    7604 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:52:36.657790    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:52:36.657790    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:52:36.665781    7604 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:52:36.665781    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 18:52:36.665781    7604 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 18:52:36.674776    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	W0229 18:52:36.723824    7604 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:52:36.748456    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 18:52:36.748456    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 18:52:36.748456    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 18:52:36.857852    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:52:36.917442    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:52:36.956032    7604 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:52:36.960910    7604 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:52:36.960910    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 18:52:36.960910    7604 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:52:36.973003    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 18:52:36.998911    7604 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:52:36.998911    7604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 18:52:36.998911    7604 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:52:37.009532    7604 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:52:37.014731    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 18:52:37.050864    7604 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 18:52:37.050864    7604 cache_images.go:92] LoadImages completed in 1.0980617s
	W0229 18:52:37.051402    7604 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0: The system cannot find the file specified.
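
The "windows sanitize" lines earlier and the CreateFile error here are two sides of the same detail: image references contain ':', which is illegal in Windows file names, so the on-disk cache replaces it with '_' (etcd:3.3.15-0 becomes etcd_3.3.15-0); the warning means the sanitized cache file was never written, so the images are pulled instead of loaded from cache. A sketch of that rename (sanitizeCachePath is a hypothetical helper; the real logic lives in localpath.go):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sanitizeCachePath replaces the ':' of an image tag with '_' so the path
    // is legal on Windows, while keeping a drive-letter colon like C:\ intact.
    func sanitizeCachePath(p string) string {
    	if len(p) >= 2 && p[1] == ':' {
    		return p[:2] + strings.ReplaceAll(p[2:], ":", "_")
    	}
    	return strings.ReplaceAll(p, ":", "_")
    }

    func main() {
    	fmt.Println(sanitizeCachePath(`C:\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0`))
    	// Output: C:\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
    }
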
	I0229 18:52:37.062852    7604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:52:37.173053    7604 cni.go:84] Creating CNI manager for ""
	I0229 18:52:37.173342    7604 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:52:37.173405    7604 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:52:37.173405    7604 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-718400 NodeName:old-k8s-version-718400 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:52:37.173405    7604 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-718400"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-718400
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.103.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
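	The config rendered above is a four-document YAML: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration, separated by "---". A quick, hypothetical spot-check that all four documents survived the transfer to the node (run over minikube ssh -p old-k8s-version-718400, against the kubeadm.yaml.new path shown a few lines below):

	    # list the kind of every YAML document in the rendered config
	    grep -n '^kind:' /var/tmp/minikube/kubeadm.yaml.new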
	
	I0229 18:52:37.173405    7604 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-718400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
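	The kubelet drop-in above uses a standard systemd idiom: the bare "ExecStart=" line first clears the ExecStart inherited from the base kubelet.service, and the following line installs the version-pinned minikube command. One plain-systemd way (nothing minikube-specific) to confirm which unit fragments are actually in effect on the node:

	    # show the base unit plus every drop-in, in the order systemd merges them
	    systemctl cat kubelet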
	I0229 18:52:37.186672    7604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:52:37.210450    7604 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:52:37.221544    7604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:52:37.242939    7604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0229 18:52:37.278135    7604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:52:37.311321    7604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0229 18:52:37.361049    7604 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0229 18:52:37.374006    7604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
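	The hosts-file rewrite above is compact but self-contained: it filters any stale control-plane.minikube.internal line out of /etc/hosts, appends the current mapping, and copies the result back with sudo. A spelled-out equivalent, for readers untangling the quoting (hypothetical standalone version, same effect):

	    # keep every line except an old control-plane alias, if present
	    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	    # append the address currently assigned to this profile
	    printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts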
	I0229 18:52:37.393776    7604 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400 for IP: 192.168.103.2
	I0229 18:52:37.393776    7604 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:37.394874    7604 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 18:52:37.395122    7604 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:52:37.395821    7604 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.key
	I0229 18:52:37.395821    7604 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.crt with IP's: []
	I0229 18:52:37.770576    7604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.crt ...
	I0229 18:52:37.770576    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.crt: {Name:mkd2adfd0b3f11f2e278a26e3a103eedb635706a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:37.771551    7604 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.key ...
	I0229 18:52:37.771551    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.key: {Name:mka0ca0560b6d16bfbd53a4945d1e04a854a6e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:37.773065    7604 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key.33fce0b9
	I0229 18:52:37.773593    7604 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt.33fce0b9 with IP's: [192.168.103.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:52:38.137858    7604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt.33fce0b9 ...
	I0229 18:52:38.137858    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt.33fce0b9: {Name:mk291c8d72b11e00348e90eaac979bcb17e0116f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:38.138858    7604 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key.33fce0b9 ...
	I0229 18:52:38.138858    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key.33fce0b9: {Name:mk6abe4cfbc4260ad930a933824ec52fe59ddd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:38.139863    7604 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt.33fce0b9 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt
	I0229 18:52:38.151857    7604 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key.33fce0b9 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key
	I0229 18:52:38.153303    7604 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.key
	I0229 18:52:38.153303    7604 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.crt with IP's: []
	I0229 18:52:38.227406    7604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.crt ...
	I0229 18:52:38.227406    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.crt: {Name:mk3947204016370c37c0e0dd1702c7385bd3a918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:38.228138    7604 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.key ...
	I0229 18:52:38.228138    7604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.key: {Name:mk530c8e2f1dbc8ffe61b1dd681092d2b74d1201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:52:38.241048    7604 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 18:52:38.242033    7604 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 18:52:38.242033    7604 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 18:52:38.242033    7604 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 18:52:38.242033    7604 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:52:38.243035    7604 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:52:38.243035    7604 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 18:52:38.245040    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:52:38.291744    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:52:38.332744    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:52:38.372755    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:52:38.420656    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:52:38.467326    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 18:52:38.515567    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:52:38.559540    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:52:38.602554    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 18:52:38.647582    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:52:38.686158    7604 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 18:52:38.730410    7604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:52:38.775802    7604 ssh_runner.go:195] Run: openssl version
	I0229 18:52:38.802050    7604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 18:52:38.837821    7604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 18:52:38.850190    7604 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 18:52:38.861065    7604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 18:52:38.891767    7604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:52:38.924991    7604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:52:38.957663    7604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:52:38.970197    7604 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:52:38.980209    7604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:52:39.007490    7604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:52:39.042981    7604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 18:52:39.073011    7604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 18:52:39.083079    7604 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 18:52:39.094047    7604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 18:52:39.122289    7604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
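	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup convention: "openssl x509 -hash -noout -in CERT" prints the certificate's subject-name hash, and the certificate is then symlinked as /etc/ssl/certs/HASH.0 (suffix .0 for the first certificate with that hash) so subject-based lookups can find it; 3ec20f2e, b5213941 and 51391683 in the log are exactly those hashes. A minimal sketch of the same idiom for one certificate (hypothetical paths):

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"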
	I0229 18:52:39.156541    7604 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:52:39.169491    7604 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:52:39.169738    7604 kubeadm.go:404] StartCluster: {Name:old-k8s-version-718400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:52:39.177813    7604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:52:39.230047    7604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:52:39.263012    7604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:52:39.282964    7604 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 18:52:39.292970    7604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:52:39.317035    7604 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:52:39.317135    7604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 18:52:39.671249    7604 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:52:39.671249    7604 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 18:52:39.779250    7604 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0229 18:52:39.972088    7604 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:56:44.011137    7604 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:56:44.011505    7604 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:56:44.017726    7604 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:56:44.017911    7604 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:56:44.018242    7604 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:56:44.018506    7604 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:56:44.018799    7604 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:56:44.019190    7604 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:56:44.019493    7604 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:56:44.019631    7604 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:56:44.019960    7604 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:56:44.025378    7604 out.go:204]   - Generating certificates and keys ...
	I0229 18:56:44.025708    7604 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:56:44.025902    7604 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:56:44.026041    7604 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:56:44.026234    7604 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:56:44.026534    7604 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:56:44.026534    7604 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:56:44.026534    7604 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:56:44.026534    7604 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-718400 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0229 18:56:44.027078    7604 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:56:44.027290    7604 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-718400 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0229 18:56:44.027290    7604 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:56:44.027290    7604 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:56:44.027842    7604 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:56:44.028045    7604 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:56:44.028167    7604 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:56:44.028228    7604 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:56:44.028228    7604 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:56:44.028228    7604 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:56:44.028841    7604 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:56:44.032196    7604 out.go:204]   - Booting up control plane ...
	I0229 18:56:44.032196    7604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:56:44.032196    7604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:56:44.032196    7604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:56:44.032196    7604 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:56:44.033199    7604 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:56:44.033618    7604 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:56:44.033618    7604 kubeadm.go:322] 
	I0229 18:56:44.033860    7604 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:56:44.034031    7604 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:56:44.034141    7604 kubeadm.go:322] 
	I0229 18:56:44.034347    7604 kubeadm.go:322] This error is likely caused by:
	I0229 18:56:44.034433    7604 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:56:44.034753    7604 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:56:44.034753    7604 kubeadm.go:322] 
	I0229 18:56:44.035276    7604 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:56:44.035276    7604 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:56:44.035276    7604 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:56:44.035276    7604 kubeadm.go:322] 
	I0229 18:56:44.035276    7604 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:56:44.035276    7604 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:56:44.036286    7604 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:56:44.036351    7604 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:56:44.036598    7604 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:56:44.036789    7604 kubeadm.go:322] 	- 'docker logs CONTAINERID'
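	kubeadm's two suggestions above can be folded into one pass; a hypothetical loop that tails the log of every Kubernetes-named container, running or exited, using the same k8s_ name filter minikube itself applies later in this log:

	    # dump the last lines of each k8s_* container, labelled by ID
	    for c in $(docker ps -a --filter name=k8s_ --format '{{.ID}}'); do
	        echo "=== $c ==="
	        docker logs --tail 50 "$c"
	    done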
	W0229 18:56:44.036994    7604 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-718400 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-718400 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:56:44.037240    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:56:45.673110    7604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (1.6358142s)
	I0229 18:56:45.685099    7604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:56:45.709428    7604 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 18:56:45.725285    7604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:56:45.744284    7604 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:56:45.744284    7604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 18:56:46.164368    7604 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:56:46.164368    7604 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 18:56:46.279403    7604 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0229 18:56:46.464469    7604 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:00:47.652366    7604 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:00:47.653175    7604 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:00:47.658861    7604 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:00:47.659121    7604 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:00:47.659291    7604 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:00:47.659561    7604 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:00:47.659792    7604 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:00:47.659792    7604 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:00:47.659792    7604 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:00:47.659792    7604 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:00:47.659792    7604 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:00:47.663238    7604 out.go:204]   - Generating certificates and keys ...
	I0229 19:00:47.663578    7604 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:00:47.663617    7604 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:00:47.663617    7604 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:00:47.664245    7604 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:00:47.664325    7604 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:00:47.664325    7604 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:00:47.664325    7604 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:00:47.664856    7604 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:00:47.664989    7604 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:00:47.664989    7604 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:00:47.664989    7604 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:00:47.664989    7604 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:00:47.665521    7604 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:00:47.665787    7604 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:00:47.666019    7604 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:00:47.666328    7604 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:00:47.666655    7604 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:00:47.669067    7604 out.go:204]   - Booting up control plane ...
	I0229 19:00:47.669572    7604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:00:47.669843    7604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:00:47.670394    7604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:00:47.670867    7604 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:00:47.671302    7604 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:00:47.671347    7604 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:00:47.671498    7604 kubeadm.go:322] 
	I0229 19:00:47.671662    7604 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:00:47.671870    7604 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:00:47.671870    7604 kubeadm.go:322] 
	I0229 19:00:47.671970    7604 kubeadm.go:322] This error is likely caused by:
	I0229 19:00:47.672054    7604 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:00:47.672153    7604 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:00:47.672153    7604 kubeadm.go:322] 
	I0229 19:00:47.672153    7604 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:00:47.672153    7604 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:00:47.672153    7604 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:00:47.672153    7604 kubeadm.go:322] 
	I0229 19:00:47.672153    7604 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:00:47.673248    7604 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:00:47.673336    7604 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:00:47.673336    7604 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:00:47.673336    7604 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:00:47.673336    7604 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:00:47.673336    7604 kubeadm.go:406] StartCluster complete in 8m8.4996411s
	I0229 19:00:47.684291    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:00:47.745253    7604 logs.go:276] 0 containers: []
	W0229 19:00:47.745370    7604 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:00:47.761657    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:00:47.809571    7604 logs.go:276] 0 containers: []
	W0229 19:00:47.809633    7604 logs.go:278] No container was found matching "etcd"
	I0229 19:00:47.829748    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:00:47.897005    7604 logs.go:276] 0 containers: []
	W0229 19:00:47.897005    7604 logs.go:278] No container was found matching "coredns"
	I0229 19:00:47.913782    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:00:47.999859    7604 logs.go:276] 0 containers: []
	W0229 19:00:47.999859    7604 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:00:48.019040    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:00:48.088392    7604 logs.go:276] 0 containers: []
	W0229 19:00:48.088584    7604 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:00:48.106480    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:00:48.180482    7604 logs.go:276] 0 containers: []
	W0229 19:00:48.180482    7604 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:00:48.191486    7604 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:00:48.252926    7604 logs.go:276] 0 containers: []
	W0229 19:00:48.252978    7604 logs.go:278] No container was found matching "kindnet"
	I0229 19:00:48.253029    7604 logs.go:123] Gathering logs for dmesg ...
	I0229 19:00:48.253077    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:00:48.282065    7604 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:00:48.282065    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:00:48.435250    7604 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:00:48.435250    7604 logs.go:123] Gathering logs for Docker ...
	I0229 19:00:48.435250    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:00:48.489167    7604 logs.go:123] Gathering logs for container status ...
	I0229 19:00:48.489222    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:00:48.594575    7604 logs.go:123] Gathering logs for kubelet ...
	I0229 19:00:48.594675    7604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:00:48.652604    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:25 old-k8s-version-718400 kubelet[5946]: E0229 19:00:25.884918    5946 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:00:48.661282    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:28 old-k8s-version-718400 kubelet[5946]: E0229 19:00:28.885077    5946 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:00:48.662443    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:28 old-k8s-version-718400 kubelet[5946]: E0229 19:00:28.886524    5946 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:00:48.666109    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:29 old-k8s-version-718400 kubelet[5946]: E0229 19:00:29.890807    5946 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:00:48.694502    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:39 old-k8s-version-718400 kubelet[5946]: E0229 19:00:39.885644    5946 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:00:48.704301    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:42 old-k8s-version-718400 kubelet[5946]: E0229 19:00:42.884702    5946 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:00:48.705560    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:42 old-k8s-version-718400 kubelet[5946]: E0229 19:00:42.886188    5946 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:00:48.709440    7604 logs.go:138] Found kubelet problem: Feb 29 19:00:43 old-k8s-version-718400 kubelet[5946]: E0229 19:00:43.884491    5946 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
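	The ImageInspectError entries above ("Id or size of image ... is not set") mean the kubelet's image inspect against Docker is returning no usable metadata for the control-plane images, which lines up with the earlier "Unable to load cached images" failure for etcd_3.3.15-0. A hypothetical spot-check of what the daemon actually holds for those references:

	    # list local copies (if any) of the images kubelet keeps failing on
	    docker images --format '{{.Repository}}:{{.Tag}} {{.ID}} {{.Size}}' | grep k8s.gcr.io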
	W0229 19:00:48.726021    7604 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:00:48.726021    7604 out.go:239] * 
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:00:48.726711    7604 out.go:239] * 
	W0229 19:00:48.729030    7604 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:00:48.732209    7604 out.go:177] X Problems detected in kubelet:
	I0229 19:00:48.737964    7604 out.go:177]   Feb 29 19:00:25 old-k8s-version-718400 kubelet[5946]: E0229 19:00:25.884918    5946 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:00:48.743593    7604 out.go:177]   Feb 29 19:00:28 old-k8s-version-718400 kubelet[5946]: E0229 19:00:28.885077    5946 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:00:48.748465    7604 out.go:177]   Feb 29 19:00:28 old-k8s-version-718400 kubelet[5946]: E0229 19:00:28.886524    5946 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:00:48.754667    7604 out.go:177] 
	W0229 19:00:48.756777    7604 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:00:48.756777    7604 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:00:48.756777    7604 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:00:48.762836    7604 out.go:177] 

                                                
                                                
** /stderr **
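The "Problems detected in kubelet" lines above all fail with ImageInspectError ("Id or size of image ... is not set"), meaning the node's Docker daemon could not return metadata for the k8s.gcr.io v1.16.0 control-plane images. A minimal diagnostic sketch, assuming docker exec access to the node container named in the inspect output below; note that k8s.gcr.io has since been frozen and redirected to registry.k8s.io, so pulls of old tags may not behave as they did when this base image was built:

	docker exec old-k8s-version-718400 docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 --format '{{.Id}} {{.Size}}'
	# An empty Id/Size matches the kubelet error; re-pulling inside the node is one hypothetical remedy:
	docker exec old-k8s-version-718400 docker pull k8s.gcr.io/kube-apiserver:v1.16.0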
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-718400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-718400:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95",
	        "Created": "2024-02-29T18:51:53.042271456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:51:53.747450548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hostname",
	        "HostsPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hosts",
	        "LogPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95-json.log",
	        "Name": "/old-k8s-version-718400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-718400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-718400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-718400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-718400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-718400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "143b201fd04f8e3ec5614438270c0a72d0f9c54321ed722b9ca32e9a5a58601e",
	            "SandboxKey": "/var/run/docker/netns/143b201fd04f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59625"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59626"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59627"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59623"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59624"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-718400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "12e46b2d6b8f",
	                        "old-k8s-version-718400"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "b75bbe82b7c3705bcc35a14b3795bdbd848e1be9ef602ed5c81af9b5c594adc5",
	                    "EndpointID": "b540942eee5758f379670873f86ccd0e0dbbef97ce41378c8a1817b749ff37e2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-718400",
	                        "12e46b2d6b8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
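When reading these post-mortems, the full docker inspect JSON can be reduced to the few fields the checks key on (state, restart count, network address). A sketch using the same container name and docker inspect's Go-template --format support:

	docker inspect old-k8s-version-718400 --format 'status={{.State.Status}} restarts={{.RestartCount}} ip={{with index .NetworkSettings.Networks "old-k8s-version-718400"}}{{.IPAddress}}{{end}}'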
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 6 (2.2185106s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:00:50.046699    7760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 19:00:51.919600    7760 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-718400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-718400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (563.75s)
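Both minikube invocations in this test also print the W0229 warning "Unable to resolve the current Docker CLI context \"default\"": the CLI config points at a context whose metadata file is missing under .docker\contexts\meta. A hedged cleanup sketch for the Windows host (the built-in default context needs no meta.json, so re-selecting it usually clears the warning):

	docker context ls
	docker context use default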

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (4.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-718400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-718400 create -f testdata\busybox.yaml: exit status 1 (205.9148ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-718400" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-718400 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-718400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-718400:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95",
	        "Created": "2024-02-29T18:51:53.042271456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:51:53.747450548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hostname",
	        "HostsPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hosts",
	        "LogPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95-json.log",
	        "Name": "/old-k8s-version-718400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-718400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-718400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-718400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-718400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-718400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "143b201fd04f8e3ec5614438270c0a72d0f9c54321ed722b9ca32e9a5a58601e",
	            "SandboxKey": "/var/run/docker/netns/143b201fd04f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59625"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59626"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59627"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59623"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59624"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-718400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "12e46b2d6b8f",
	                        "old-k8s-version-718400"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "b75bbe82b7c3705bcc35a14b3795bdbd848e1be9ef602ed5c81af9b5c594adc5",
	                    "EndpointID": "b540942eee5758f379670873f86ccd0e0dbbef97ce41378c8a1817b749ff37e2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-718400",
	                        "12e46b2d6b8f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 6 (2.0484847s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:00:52.783599   10968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 19:00:54.524923   10968 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-718400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-718400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
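The status error above reports that the profile "does not appear in" the kubeconfig, which is also why stdout warns about a stale context. A minimal recovery sketch, using the same binary path the tests use and assuming the profile still exists:

	out/minikube-windows-amd64.exe update-context -p old-k8s-version-718400
	kubectl config get-contexts old-k8s-version-718400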
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-718400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-718400:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95",
	        "Created": "2024-02-29T18:51:53.042271456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:51:53.747450548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hostname",
	        "HostsPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hosts",
	        "LogPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95-json.log",
	        "Name": "/old-k8s-version-718400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-718400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-718400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-718400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-718400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-718400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "143b201fd04f8e3ec5614438270c0a72d0f9c54321ed722b9ca32e9a5a58601e",
	            "SandboxKey": "/var/run/docker/netns/143b201fd04f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59625"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59626"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59627"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59623"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59624"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-718400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "12e46b2d6b8f",
	                        "old-k8s-version-718400"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "b75bbe82b7c3705bcc35a14b3795bdbd848e1be9ef602ed5c81af9b5c594adc5",
	                    "EndpointID": "b540942eee5758f379670873f86ccd0e0dbbef97ce41378c8a1817b749ff37e2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-718400",
	                        "12e46b2d6b8f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
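The inspect dump above is plain JSON, so the same details the post-mortem relies on can be pulled out programmatically. A minimal Go sketch that decodes the array and prints the mounts (container name and field names taken from the dump; this is an illustration, not the harness's helper code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type mount struct {
	Type        string
	Source      string
	Destination string
	RW          bool
}

func main() {
	// docker inspect prints a JSON array, one object per container.
	out, err := exec.Command("docker", "inspect", "old-k8s-version-718400").Output()
	if err != nil {
		panic(err)
	}
	var containers []struct {
		Name   string
		Mounts []mount
	}
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		for _, m := range c.Mounts {
			// For the dump above this prints the /var volume and the
			// read-only /lib/modules bind mount.
			fmt.Printf("%s: %s %s -> %s (rw=%v)\n", c.Name, m.Type, m.Source, m.Destination, m.RW)
		}
	}
}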
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 6 (1.6353865s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 19:00:55.111968    4392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 19:00:56.461831    4392 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-718400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-718400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (4.52s)
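The status error above (status.go:415) fires because the profile's entry is missing from the kubeconfig, not because the container is down. A minimal Go sketch of that check using client-go's clientcmd, with the path and profile name copied from the log (an illustration under those assumptions, not the harness's actual status.go logic):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := `C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`
	name := "old-k8s-version-718400"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	ctx, ok := cfg.Contexts[name]
	if !ok {
		// This is the condition behind the "does not appear in ... kubeconfig" error.
		fmt.Printf("%q does not appear in %s\n", name, kubeconfig)
		os.Exit(1)
	}
	if cluster, ok := cfg.Clusters[ctx.Cluster]; ok {
		fmt.Printf("%q present; endpoint %s\n", name, cluster.Server)
	}
}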

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-718400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-718400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m42.7855012s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	W0229 19:00:56.684053    8696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_addons_439019a81e20ad064ebb72ced3e20f3355766968_5.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-718400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
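Every kubectl apply in the addon callback failed the same way: "dial tcp 127.0.0.1:8443: connect: connection refused", meaning nothing was listening on the apiserver port when the manifests were applied. That symptom is reproducible with a plain TCP dial; a short Go sketch, with the address copied from the errors above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// With no apiserver bound to the port this prints "connection refused",
		// matching the kubectl errors in the stderr block above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}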
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-718400 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-718400 describe deploy/metrics-server -n kube-system: exit status 1 (179.5426ms)

** stderr ** 
	error: context "old-k8s-version-718400" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-718400 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-718400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-718400:

-- stdout --
	[
	    {
	        "Id": "12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95",
	        "Created": "2024-02-29T18:51:53.042271456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:51:53.747450548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hostname",
	        "HostsPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hosts",
	        "LogPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95-json.log",
	        "Name": "/old-k8s-version-718400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-718400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-718400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-718400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-718400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-718400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "143b201fd04f8e3ec5614438270c0a72d0f9c54321ed722b9ca32e9a5a58601e",
	            "SandboxKey": "/var/run/docker/netns/143b201fd04f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59625"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59626"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59627"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59623"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59624"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-718400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "12e46b2d6b8f",
	                        "old-k8s-version-718400"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "b75bbe82b7c3705bcc35a14b3795bdbd848e1be9ef602ed5c81af9b5c594adc5",
	                    "EndpointID": "b540942eee5758f379670873f86ccd0e0dbbef97ce41378c8a1817b749ff37e2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-718400",
	                        "12e46b2d6b8f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
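Later in this log the harness reads single fields out of this structure with a Go template (see the cli_runner lines running docker container inspect -f against "22/tcp"). The same template can be run standalone; a sketch using os/exec, with the container name and port taken from the dump above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The exact template the harness uses to find the forwarded SSH port.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"old-k8s-version-718400").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// For the container state captured above this prints 59625.
	fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out)))
}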
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 6 (1.7737394s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 19:02:39.943124    9808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 19:02:41.454005    9808 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-718400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-718400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.00s)
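The status output itself names the remedy for the stale context: "To fix the kubectl context, run `minikube update-context`". Driven from Go the way the harness drives its other minikube invocations, that would look roughly like the sketch below (binary path and the -p profile flag as used elsewhere in this log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Rewrites the profile's kubeconfig entry to point at the current endpoint.
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"update-context", "-p", "old-k8s-version-718400").CombinedOutput()
	fmt.Println(string(out), err)
}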

TestStartStop/group/old-k8s-version/serial/SecondStart (804.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: exit status 109 (13m18.698821s)

-- stdout --
	* [old-k8s-version-718400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-718400 in cluster old-k8s-version-718400
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Restarting existing docker container for "old-k8s-version-718400" ...
	* Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	  Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	  Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	
	

-- /stdout --
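The kubelet problems above all fail with ImageInspectError: the engine is returning image metadata with no Id or size set, so the kubelet refuses to start the static pods. Asking the engine directly for those two fields shows what the kubelet sees; a Go sketch shelling out to docker image inspect, with the image name copied from the first error:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}} {{.Size}}",
		"k8s.gcr.io/kube-apiserver:v1.16.0").CombinedOutput()
	// A healthy image prints its sha256 Id and byte size; empty or missing
	// values here correspond to the kubelet's "Id or size ... is not set".
	fmt.Println(string(out), err)
}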
** stderr ** 
	W0229 19:02:47.544797    3012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 19:02:47.632787    3012 out.go:291] Setting OutFile to fd 744 ...
	I0229 19:02:47.633769    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:02:47.633821    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:02:47.633870    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:02:47.664958    3012 out.go:298] Setting JSON to false
	I0229 19:02:47.668872    3012 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11327,"bootTime":1709222039,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 19:02:47.668872    3012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:02:47.671353    3012 out.go:177] * [old-k8s-version-718400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:02:47.678791    3012 notify.go:220] Checking for updates...
	I0229 19:02:47.681802    3012 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:02:47.684783    3012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:02:47.687425    3012 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 19:02:47.690107    3012 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:02:47.692782    3012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:02:47.695946    3012 config.go:182] Loaded profile config "old-k8s-version-718400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 19:02:47.700319    3012 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 19:02:47.702995    3012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:02:48.049435    3012 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 19:02:48.061494    3012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:02:48.546049    3012 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:02:48.493411849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:02:48.549443    3012 out.go:177] * Using the docker driver based on existing profile
	I0229 19:02:48.552757    3012 start.go:299] selected driver: docker
	I0229 19:02:48.552808    3012 start.go:903] validating driver "docker" against &{Name:old-k8s-version-718400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:02:48.553071    3012 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:02:48.631554    3012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:02:49.053371    3012 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:02:49.009923317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:02:49.053521    3012 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:02:49.053521    3012 cni.go:84] Creating CNI manager for ""
	I0229 19:02:49.053521    3012 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 19:02:49.053521    3012 start_flags.go:323] config:
	{Name:old-k8s-version-718400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:02:49.058766    3012 out.go:177] * Starting control plane node old-k8s-version-718400 in cluster old-k8s-version-718400
	I0229 19:02:49.061608    3012 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 19:02:49.065903    3012 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 19:02:49.068146    3012 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 19:02:49.068212    3012 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 19:02:49.068274    3012 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 19:02:49.068274    3012 cache.go:56] Caching tarball of preloaded images
	I0229 19:02:49.069022    3012 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:02:49.069022    3012 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 19:02:49.069022    3012 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\config.json ...
	I0229 19:02:49.267064    3012 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 19:02:49.267248    3012 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 19:02:49.267321    3012 cache.go:194] Successfully downloaded all kic artifacts
	I0229 19:02:49.267360    3012 start.go:365] acquiring machines lock for old-k8s-version-718400: {Name:mkb837b5f41f1d87e763b06f510aab2257e4f19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:02:49.267568    3012 start.go:369] acquired machines lock for "old-k8s-version-718400" in 138.6µs
	I0229 19:02:49.267801    3012 start.go:96] Skipping create...Using existing machine configuration
	I0229 19:02:49.267843    3012 fix.go:54] fixHost starting: 
	I0229 19:02:49.287589    3012 cli_runner.go:164] Run: docker container inspect old-k8s-version-718400 --format={{.State.Status}}
	I0229 19:02:49.491338    3012 fix.go:102] recreateIfNeeded on old-k8s-version-718400: state=Stopped err=<nil>
	W0229 19:02:49.491338    3012 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 19:02:49.495354    3012 out.go:177] * Restarting existing docker container for "old-k8s-version-718400" ...
	I0229 19:02:49.513189    3012 cli_runner.go:164] Run: docker start old-k8s-version-718400
	I0229 19:02:50.339428    3012 cli_runner.go:164] Run: docker container inspect old-k8s-version-718400 --format={{.State.Status}}
	I0229 19:02:50.601325    3012 kic.go:430] container "old-k8s-version-718400" state is running.
	I0229 19:02:50.626395    3012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-718400
	I0229 19:02:50.851948    3012 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\config.json ...
	I0229 19:02:50.855205    3012 machine.go:88] provisioning docker machine ...
	I0229 19:02:50.855315    3012 ubuntu.go:169] provisioning hostname "old-k8s-version-718400"
	I0229 19:02:50.866195    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:51.121495    3012 main.go:141] libmachine: Using SSH client type: native
	I0229 19:02:51.122625    3012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 60232 <nil> <nil>}
	I0229 19:02:51.122625    3012 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-718400 && echo "old-k8s-version-718400" | sudo tee /etc/hostname
	I0229 19:02:51.127912    3012 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0229 19:02:54.358978    3012 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-718400
	
	I0229 19:02:54.375353    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:54.592789    3012 main.go:141] libmachine: Using SSH client type: native
	I0229 19:02:54.593341    3012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 60232 <nil> <nil>}
	I0229 19:02:54.593424    3012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-718400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-718400/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-718400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:02:54.789894    3012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:02:54.789894    3012 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 19:02:54.789894    3012 ubuntu.go:177] setting up certificates
	I0229 19:02:54.790433    3012 provision.go:83] configureAuth start
	I0229 19:02:54.802015    3012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-718400
	I0229 19:02:55.001777    3012 provision.go:138] copyHostCerts
	I0229 19:02:55.002335    3012 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:02:55.002335    3012 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 19:02:55.002409    3012 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 19:02:55.004529    3012 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:02:55.004529    3012 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 19:02:55.005036    3012 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:02:55.006622    3012 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:02:55.006675    3012 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 19:02:55.007127    3012 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:02:55.008534    3012 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-718400 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-718400]
	I0229 19:02:55.282702    3012 provision.go:172] copyRemoteCerts
	I0229 19:02:55.290218    3012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:02:55.306068    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:55.515368    3012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60232 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 19:02:55.654789    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:02:55.702957    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 19:02:55.750642    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 19:02:55.799000    3012 provision.go:86] duration metric: configureAuth took 1.0085583s
	I0229 19:02:55.799059    3012 ubuntu.go:193] setting minikube options for container-runtime
	I0229 19:02:55.799769    3012 config.go:182] Loaded profile config "old-k8s-version-718400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 19:02:55.811651    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:56.035948    3012 main.go:141] libmachine: Using SSH client type: native
	I0229 19:02:56.035948    3012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 60232 <nil> <nil>}
	I0229 19:02:56.035948    3012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:02:56.239725    3012 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 19:02:56.239725    3012 ubuntu.go:71] root file system type: overlay
	I0229 19:02:56.239725    3012 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:02:56.252071    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:56.470434    3012 main.go:141] libmachine: Using SSH client type: native
	I0229 19:02:56.470549    3012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 60232 <nil> <nil>}
	I0229 19:02:56.470549    3012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:02:56.696791    3012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 19:02:56.710189    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:56.915719    3012 main.go:141] libmachine: Using SSH client type: native
	I0229 19:02:56.915719    3012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 60232 <nil> <nil>}
	I0229 19:02:56.915719    3012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:02:57.146950    3012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:02:57.147647    3012 machine.go:91] provisioned docker machine in 6.2923411s
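
The three SSH round-trips above implement an idempotent unit update: render docker.service into a .new file, diff it against the live unit, and only when they differ swap the file in and run daemon-reload/enable/restart. A sketch of that pattern under the same assumptions (paths and commands illustrative, not minikube's actual helper):

package provision

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit replaces path with rendered only when the content changed,
// so an unchanged config never triggers a needless docker restart.
func updateUnit(path string, rendered []byte) error {
	current, _ := os.ReadFile(path) // a missing unit reads as empty
	if bytes.Equal(current, rendered) {
		return nil
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}
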
	I0229 19:02:57.147647    3012 start.go:300] post-start starting for "old-k8s-version-718400" (driver="docker")
	I0229 19:02:57.147647    3012 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:02:57.164225    3012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:02:57.172578    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:57.381085    3012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60232 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 19:02:57.575585    3012 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:02:57.585493    3012 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 19:02:57.585493    3012 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 19:02:57.585493    3012 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 19:02:57.585493    3012 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 19:02:57.585493    3012 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 19:02:57.586325    3012 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 19:02:57.587398    3012 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 19:02:57.601695    3012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:02:57.626455    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 19:02:57.681501    3012 start.go:303] post-start completed in 533.8498ms
	I0229 19:02:57.698740    3012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 19:02:57.715925    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:57.934967    3012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60232 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 19:02:58.139516    3012 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 19:02:58.158264    3012 fix.go:56] fixHost completed within 8.8903479s
	I0229 19:02:58.158264    3012 start.go:83] releasing machines lock for "old-k8s-version-718400", held for 8.8904997s
	I0229 19:02:58.170701    3012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-718400
	I0229 19:02:58.373757    3012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:02:58.388457    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:58.391876    3012 ssh_runner.go:195] Run: cat /version.json
	I0229 19:02:58.402267    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:02:58.619281    3012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60232 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 19:02:58.636858    3012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60232 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-718400\id_rsa Username:docker}
	I0229 19:02:58.973355    3012 ssh_runner.go:195] Run: systemctl --version
	I0229 19:02:59.009526    3012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 19:02:59.027717    3012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:02:59.044830    3012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 19:02:59.088299    3012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 19:02:59.110983    3012 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
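
The two find/sed invocations above would rewrite any bridge or podman CNI config under /etc/cni/net.d onto minikube's pod CIDR; in this run there was nothing to touch. The core substitution, as a hedged Go equivalent of the sed expression:

package cni

import "regexp"

// subnetRe matches the "subnet" field of a CNI config, the same field
// the sed expressions in the log target.
var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

// forcePodCIDR pins every subnet in conf to the cluster pod CIDR
// (10.244.0.0/16 in this run).
func forcePodCIDR(conf []byte, cidr string) []byte {
	return subnetRe.ReplaceAll(conf, []byte(`"subnet": "`+cidr+`"`))
}
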
	I0229 19:02:59.110983    3012 start.go:475] detecting cgroup driver to use...
	I0229 19:02:59.110983    3012 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 19:02:59.112302    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:02:59.165038    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 19:02:59.208752    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:02:59.229816    3012 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:02:59.240875    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:02:59.277081    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:02:59.315029    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:02:59.351057    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:02:59.402744    3012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:02:59.472366    3012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:02:59.519671    3012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:02:59.579622    3012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:02:59.632042    3012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:02:59.922129    3012 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 19:03:00.235277    3012 start.go:475] detecting cgroup driver to use...
	I0229 19:03:00.235383    3012 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 19:03:00.260234    3012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:03:00.310012    3012 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 19:03:00.337104    3012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:03:00.386678    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:03:00.476907    3012 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:03:00.529566    3012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:03:00.561096    3012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 19:03:00.655611    3012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:03:00.983186    3012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:03:01.319894    3012 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:03:01.319950    3012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:03:01.405606    3012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:03:01.708224    3012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:03:02.918572    3012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2103374s)
	I0229 19:03:02.934565    3012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:03:03.035579    3012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:03:03.109199    3012 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 25.0.3 ...
	I0229 19:03:03.125205    3012 cli_runner.go:164] Run: docker exec -t old-k8s-version-718400 dig +short host.docker.internal
	I0229 19:03:03.504205    3012 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 19:03:03.528211    3012 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 19:03:03.541207    3012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
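
The /etc/hosts edit above uses the usual grep-out, append, copy-back pattern, so repeated starts never duplicate the host.minikube.internal line. Roughly the same semantics in Go (sudo and temp-file plumbing elided):

package hosts

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends a fresh
// "ip<TAB>host" mapping, mirroring the shell one-liner in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}
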
	I0229 19:03:03.586194    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:03:03.838186    3012 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 19:03:03.850186    3012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:03:03.905198    3012 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 19:03:03.905198    3012 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 19:03:03.928210    3012 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 19:03:03.971210    3012 ssh_runner.go:195] Run: which lz4
	I0229 19:03:04.012199    3012 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 19:03:04.028187    3012 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 19:03:04.029209    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 19:03:19.110477    3012 docker.go:649] Took 15.125146 seconds to copy over tarball
	I0229 19:03:19.128459    3012 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 19:03:23.445542    3012 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.3170476s)
	I0229 19:03:23.445646    3012 ssh_runner.go:146] rm: /preloaded.tar.lz4
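
The preload sequence above is: stat the tarball on the node, copy it over when the stat fails, unpack it into /var with lz4, then delete it. A condensed sketch of that flow with plain exec standing in for the ssh_runner (paths taken from the log):

package preload

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload copies src to dst if dst is missing, unpacks it into
// /var, and removes the tarball afterwards, like the sequence above.
func ensurePreload(src, dst string) error {
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		// scp over SSH in the real flow; cp keeps the sketch local
		if out, err := exec.Command("cp", src, dst).CombinedOutput(); err != nil {
			return fmt.Errorf("copy: %v: %s", err, out)
		}
	}
	out, err := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("untar: %v: %s", err, out)
	}
	return os.Remove(dst)
}
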
	I0229 19:03:23.544191    3012 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 19:03:23.564908    3012 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 19:03:23.614320    3012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:03:23.783367    3012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:03:31.047071    3012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.2636438s)
	I0229 19:03:31.063027    3012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:03:31.115545    3012 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 19:03:31.115545    3012 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 19:03:31.115545    3012 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 19:03:31.130204    3012 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:03:31.131190    3012 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:03:31.142186    3012 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:03:31.143187    3012 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:31.146201    3012 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 19:03:31.148184    3012 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:03:31.148184    3012 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 19:03:31.149210    3012 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 19:03:31.149210    3012 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:03:31.149210    3012 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:03:31.155200    3012 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:03:31.157207    3012 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:31.159181    3012 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 19:03:31.160197    3012 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:03:31.161202    3012 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 19:03:31.169185    3012 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	W0229 19:03:31.278660    3012 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:03:31.373710    3012 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:03:31.483883    3012 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:03:31.595955    3012 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:03:31.677169    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	W0229 19:03:31.705100    3012 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:03:31.724086    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:03:31.732094    3012 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 19:03:31.732094    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 19:03:31.732094    3012 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:03:31.732094    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:03:31.746082    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:03:31.780092    3012 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 19:03:31.780092    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 19:03:31.780092    3012 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:03:31.795085    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:03:31.800142    3012 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 19:03:31.800142    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 19:03:31.800142    3012 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:03:31.817102    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 19:03:31.822089    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	W0229 19:03:31.828856    3012 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:03:31.849830    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 19:03:31.872846    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 19:03:31.902834    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:03:31.906825    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	W0229 19:03:31.936880    3012 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:03:31.959836    3012 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 19:03:31.959836    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 19:03:31.959836    3012 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 19:03:31.976831    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 19:03:32.021839    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:03:32.023840    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	W0229 19:03:32.046834    3012 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:03:32.078639    3012 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 19:03:32.078639    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 19:03:32.078639    3012 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:03:32.093688    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:03:32.139489    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 19:03:32.157490    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 19:03:32.198486    3012 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 19:03:32.198486    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 19:03:32.198486    3012 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 19:03:32.207489    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 19:03:32.244509    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 19:03:32.313557    3012 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 19:03:32.362302    3012 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 19:03:32.362302    3012 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 19:03:32.362857    3012 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 19:03:32.376832    3012 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 19:03:32.423043    3012 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 19:03:32.423043    3012 cache_images.go:92] LoadImages completed in 1.3074882s
	W0229 19:03:32.423990    3012 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0: The system cannot find the file specified.
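
The "windows sanitize" lines from localpath.go record why the cached images are stored as kube-proxy_v1.16.0 rather than kube-proxy:v1.16.0: a colon is illegal in a Windows file name, so the tag separator is rewritten before the reference is used as a path. An illustrative version of that rename (not minikube's exact helper):

package cache

import "strings"

// sanitizeWindowsPath replaces the image tag's ':' with '_' so the
// reference is usable as a Windows file name; a colon that belongs to
// a registry port (before the last '/') is left alone.
func sanitizeWindowsPath(ref string) string {
	i := strings.LastIndex(ref, ":")
	if i > strings.LastIndex(ref, "/") {
		return ref[:i] + "_" + ref[i+1:]
	}
	return ref
}
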
	I0229 19:03:32.432997    3012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:03:32.554881    3012 cni.go:84] Creating CNI manager for ""
	I0229 19:03:32.555131    3012 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 19:03:32.555131    3012 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:03:32.555204    3012 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-718400 NodeName:old-k8s-version-718400 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 19:03:32.555528    3012 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-718400"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-718400
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.103.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 19:03:32.555716    3012 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-718400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
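
The kubeadm YAML and the kubelet unit above are rendered from Go templates fed by the options struct logged at kubeadm.go:176. A toy rendering of the first stanza shows the idea; the struct fields here are illustrative stand-ins, not minikube's real types:

package main

import (
	"os"
	"text/template"
)

// A trimmed template in the spirit of the generated config; only the
// InitConfiguration stanza is shown.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.103.2", 8443, "/var/run/dockershim.sock", "old-k8s-version-718400"}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
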
	I0229 19:03:32.571650    3012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 19:03:32.597859    3012 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:03:32.615618    3012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:03:32.637588    3012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0229 19:03:32.676684    3012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:03:32.709100    3012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0229 19:03:32.755304    3012 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0229 19:03:32.765302    3012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:03:32.786552    3012 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400 for IP: 192.168.103.2
	I0229 19:03:32.787578    3012 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:32.787578    3012 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 19:03:32.787578    3012 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:03:32.787578    3012 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\client.key
	I0229 19:03:32.789223    3012 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key.33fce0b9
	I0229 19:03:32.789543    3012 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.key
	I0229 19:03:32.791359    3012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 19:03:32.791689    3012 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 19:03:32.791743    3012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 19:03:32.792073    3012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 19:03:32.792403    3012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:03:32.792787    3012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:03:32.793292    3012 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 19:03:32.795745    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:03:32.859096    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 19:03:32.921182    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:03:32.973072    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-718400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 19:03:33.014588    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:03:33.062449    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 19:03:33.114595    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:03:33.159622    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:03:33.199610    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 19:03:33.247971    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:03:33.298324    3012 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 19:03:33.356977    3012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:03:33.400966    3012 ssh_runner.go:195] Run: openssl version
	I0229 19:03:33.431383    3012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 19:03:33.469122    3012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 19:03:33.492968    3012 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 19:03:33.511879    3012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 19:03:33.539615    3012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:03:33.579816    3012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:03:33.633154    3012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:03:33.645149    3012 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:03:33.661146    3012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:03:33.691146    3012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:03:33.728151    3012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 19:03:33.759147    3012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 19:03:33.771142    3012 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 19:03:33.784160    3012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 19:03:33.812151    3012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
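
Each certificate above is installed twice: once under its own name and once as a symlink named after its OpenSSL subject hash (openssl x509 -hash -noout, then ln -fs into /etc/ssl/certs/<hash>.0), which is the layout OpenSSL uses for CA lookup. A sketch that shells out for the hash, as the log does:

package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCALink symlinks pemPath to /etc/ssl/certs/<hash>.0, where
// <hash> comes from openssl x509 -hash -noout.
func installCALink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}
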
	I0229 19:03:33.841174    3012 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:03:33.860147    3012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 19:03:33.893451    3012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 19:03:33.930408    3012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 19:03:33.957410    3012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 19:03:33.984398    3012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 19:03:34.020608    3012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
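
Each openssl x509 -checkend 86400 run above exits nonzero if the certificate expires within the next 24 hours, which is what would force regeneration. The same validity check in pure Go with crypto/x509, as a hedged equivalent:

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, matching openssl's -checkend semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}
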
	I0229 19:03:34.033598    3012 kubeadm.go:404] StartCluster: {Name:old-k8s-version-718400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-718400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:03:34.042595    3012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:03:34.120304    3012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:03:34.144320    3012 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 19:03:34.144320    3012 kubeadm.go:636] restartCluster start
	I0229 19:03:34.158135    3012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 19:03:34.174144    3012 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:34.182128    3012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-718400
	I0229 19:03:34.369725    3012 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-718400" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:03:34.370868    3012 kubeconfig.go:146] "old-k8s-version-718400" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0229 19:03:34.372225    3012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:03:34.403385    3012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 19:03:34.432553    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:34.445547    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:34.464396    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:34.946885    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:34.965776    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:34.990132    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:35.444599    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:35.463482    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:35.492906    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:35.941448    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:35.956554    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:35.999681    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:36.437961    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:36.450955    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:36.481002    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:36.937061    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:36.955576    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:36.981404    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:37.448489    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:37.463396    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:37.490026    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:37.943390    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:37.957401    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:37.978390    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:38.440572    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:38.451812    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:38.474203    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:38.938673    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:38.954478    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:38.988683    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:39.440992    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:39.454685    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:39.481695    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:39.943453    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:39.961807    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:39.984208    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:40.444985    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:40.458511    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:40.483720    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:40.942337    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:40.958318    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:40.982319    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:41.433012    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:41.446255    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:41.477531    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:41.947821    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:41.965731    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:41.988636    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:42.435166    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:42.454311    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:42.481823    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:42.947740    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:42.962726    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:42.983723    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:43.433713    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:43.449719    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:43.473724    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:43.933769    3012 api_server.go:166] Checking apiserver status ...
	I0229 19:03:43.948389    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 19:03:44.020770    3012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:03:44.445853    3012 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
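
The run of "Checking apiserver status" entries above is a fixed-interval poll, roughly every 500ms, that gives up once its context deadline passes; that timeout is the "context deadline exceeded" recorded at kubeadm.go:611, which routes the start into the reconfigure path. A generic form of that wait loop (the check function is a stand-in for the pgrep call):

package wait

import (
	"context"
	"time"
)

// poll runs check every interval until it succeeds or ctx expires,
// returning ctx.Err() ("context deadline exceeded") on timeout.
func poll(ctx context.Context, interval time.Duration, check func() bool) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}
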
	I0229 19:03:44.446028    3012 kubeadm.go:1135] stopping kube-system containers ...
	I0229 19:03:44.459636    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:03:44.521472    3012 docker.go:483] Stopping containers: [dba92e9fc2e8 23cc10015eea 741a328fdb54 f8d99f6a70f8]
	I0229 19:03:44.530461    3012 ssh_runner.go:195] Run: docker stop dba92e9fc2e8 23cc10015eea 741a328fdb54 f8d99f6a70f8
	I0229 19:03:44.596960    3012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 19:03:44.639948    3012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:03:44.667708    3012 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 29 18:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 29 18:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Feb 29 18:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 29 18:56 /etc/kubernetes/scheduler.conf
	
	I0229 19:03:44.683981    3012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0229 19:03:44.732609    3012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0229 19:03:44.782872    3012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0229 19:03:44.812876    3012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0229 19:03:44.842878    3012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:44.858888    3012 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 19:03:44.858888    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:03:44.990859    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:03:45.838586    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:03:46.336362    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:03:46.850241    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
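Note: the reconfigure path replays individual kubeadm init phases instead of a full init. Written out as plain shell, the five Run: lines above reduce to one pattern (the variable P is only shorthand introduced here):

    # replay each kubeadm init phase against the regenerated config
    P=/var/lib/minikube/binaries/v1.16.0
    sudo env PATH="$P:$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$P:$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$P:$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$P:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$P:$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml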
	I0229 19:03:47.232664    3012 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:03:47.248660    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:03:47.757382    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same probe repeated at roughly 0.5s intervals from 19:03:48 through 19:04:46 without ever finding a kube-apiserver process; 117 near-identical lines elided ...]
	I0229 19:04:46.752280    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
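Note: the elided run above is a fixed-interval wait loop. A bash sketch of its shape only (minikube implements this in Go; both the 0.5s interval and the ~60s budget are read off the log timestamps, so treat them as illustrative):

    # poll for the apiserver process until it appears or the budget runs out
    deadline=$((SECONDS + 60))   # ~60s, inferred from the timestamps above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      (( SECONDS >= deadline )) && { echo "apiserver never appeared" >&2; break; }
      sleep 0.5
    done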
	I0229 19:04:47.247420    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:04:47.296756    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.296756    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:04:47.310740    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:04:47.353342    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.353342    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:04:47.365818    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:04:47.403510    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.403563    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:04:47.417690    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:04:47.464673    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.464673    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:04:47.473679    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:04:47.540133    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.540201    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:04:47.557446    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:04:47.594174    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.594296    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:04:47.609255    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:04:47.653378    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.653378    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:04:47.664971    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:04:47.713731    3012 logs.go:276] 0 containers: []
	W0229 19:04:47.713731    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
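Note: the eight lookups above apply one pattern per component: list every container, running or exited, whose name carries the kubelet's k8s_<component> prefix. Condensed into a single loop:

    # empty output for a component means no container was ever created for it
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Names}}'
    done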
	I0229 19:04:47.714271    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:04:47.714271    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:04:47.788174    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:04:47.788174    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:04:47.842665    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:28 old-k8s-version-718400 kubelet[1698]: E0229 19:04:28.962616    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:04:47.847148    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:30 old-k8s-version-718400 kubelet[1698]: E0229 19:04:30.954923    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:04:47.851140    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:32 old-k8s-version-718400 kubelet[1698]: E0229 19:04:32.961976    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:04:47.854140    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:33 old-k8s-version-718400 kubelet[1698]: E0229 19:04:33.961946    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:04:47.868148    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:40 old-k8s-version-718400 kubelet[1698]: E0229 19:04:40.973171    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:04:47.876152    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:44 old-k8s-version-718400 kubelet[1698]: E0229 19:04:44.962019    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:04:47.881363    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:46 old-k8s-version-718400 kubelet[1698]: E0229 19:04:46.961080    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
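Note: all six kubelet problems above are the same ImageInspectError on the four control-plane images, which explains why no k8s_* containers exist. A hand check of those images inside the node would look like this (image names copied from the errors; this is a diagnostic sketch, not part of the test run):

    for img in k8s.gcr.io/kube-apiserver:v1.16.0 \
               k8s.gcr.io/kube-controller-manager:v1.16.0 \
               k8s.gcr.io/kube-scheduler:v1.16.0 \
               k8s.gcr.io/etcd:3.3.15-0; do
      # a healthy image reports both fields; the kubelet error says one is unset
      docker image inspect --format '{{.Id}} {{.Size}}' "$img" \
        || echo "cannot inspect: $img"
    done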
	I0229 19:04:47.883365    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:04:47.883365    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:04:47.913169    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:04:47.913239    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:04:48.040303    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
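Note: the refused connection on localhost:8443 is consistent with the missing kube-apiserver container rather than a wrong host or port. A direct probe of the same endpoint from the host (assuming curl is present in the node image):

    # expect "connection refused" while no apiserver process is running
    minikube ssh -p old-k8s-version-718400 -- curl -ksS https://localhost:8443/healthz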
	I0229 19:04:48.040303    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:04:48.040303    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:04:48.073226    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:04:48.073226    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:04:48.073770    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:04:48.073835    3012 out.go:239]   Feb 29 19:04:32 old-k8s-version-718400 kubelet[1698]: E0229 19:04:32.961976    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:04:48.073835    3012 out.go:239]   Feb 29 19:04:33 old-k8s-version-718400 kubelet[1698]: E0229 19:04:33.961946    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:04:48.073912    3012 out.go:239]   Feb 29 19:04:40 old-k8s-version-718400 kubelet[1698]: E0229 19:04:40.973171    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:04:48.073912    3012 out.go:239]   Feb 29 19:04:44 old-k8s-version-718400 kubelet[1698]: E0229 19:04:44.962019    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:04:48.073969    3012 out.go:239]   Feb 29 19:04:46 old-k8s-version-718400 kubelet[1698]: E0229 19:04:46.961080    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:04:48.074079    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:04:48.074125    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:04:58.097574    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:04:58.141029    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:04:58.191143    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.191203    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:04:58.205843    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:04:58.248644    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.248644    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:04:58.259825    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:04:58.306729    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.306729    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:04:58.322341    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:04:58.360765    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.360765    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:04:58.368753    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:04:58.415686    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.415686    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:04:58.426273    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:04:58.472664    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.472714    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:04:58.482302    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:04:58.521585    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.521585    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:04:58.532535    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:04:58.571632    3012 logs.go:276] 0 containers: []
	W0229 19:04:58.571713    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:04:58.571739    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:04:58.571739    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:04:58.609488    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:04:58.609488    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:04:58.686579    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:04:58.686579    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:04:58.751477    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:40 old-k8s-version-718400 kubelet[1698]: E0229 19:04:40.973171    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:04:58.765453    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:44 old-k8s-version-718400 kubelet[1698]: E0229 19:04:44.962019    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:04:58.769461    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:46 old-k8s-version-718400 kubelet[1698]: E0229 19:04:46.961080    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:04:58.772448    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:47 old-k8s-version-718400 kubelet[1698]: E0229 19:04:47.957947    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:04:58.781441    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:51 old-k8s-version-718400 kubelet[1698]: E0229 19:04:51.959166    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:04:58.794452    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:57 old-k8s-version-718400 kubelet[1698]: E0229 19:04:57.946574    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:04:58.796450    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:04:58.796450    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:04:58.825067    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:04:58.825067    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:04:58.939639    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:04:58.939639    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:04:58.939639    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:04:58.939639    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:04:58.939639    3012 out.go:239]   Feb 29 19:04:44 old-k8s-version-718400 kubelet[1698]: E0229 19:04:44.962019    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:04:58.939639    3012 out.go:239]   Feb 29 19:04:46 old-k8s-version-718400 kubelet[1698]: E0229 19:04:46.961080    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:04:58.939639    3012 out.go:239]   Feb 29 19:04:47 old-k8s-version-718400 kubelet[1698]: E0229 19:04:47.957947    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:04:58.939639    3012 out.go:239]   Feb 29 19:04:51 old-k8s-version-718400 kubelet[1698]: E0229 19:04:51.959166    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:04:58.940653    3012 out.go:239]   Feb 29 19:04:57 old-k8s-version-718400 kubelet[1698]: E0229 19:04:57.946574    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:04:58.940653    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:04:58.940653    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:08.970420    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:05:09.003339    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:05:09.046688    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.046825    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:05:09.055438    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:05:09.097553    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.097553    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:05:09.107598    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:05:09.151444    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.151444    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:05:09.168536    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:05:09.208832    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.208832    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:05:09.218965    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:05:09.259398    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.259398    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:05:09.274184    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:05:09.312634    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.312634    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:05:09.324228    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:05:09.365535    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.365535    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:05:09.374396    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:05:09.417629    3012 logs.go:276] 0 containers: []
	W0229 19:05:09.417695    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:05:09.417695    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:05:09.417695    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:05:09.463153    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:46 old-k8s-version-718400 kubelet[1698]: E0229 19:04:46.961080    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:09.467107    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:47 old-k8s-version-718400 kubelet[1698]: E0229 19:04:47.957947    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:09.485243    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:51 old-k8s-version-718400 kubelet[1698]: E0229 19:04:51.959166    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:09.497243    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:57 old-k8s-version-718400 kubelet[1698]: E0229 19:04:57.946574    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:09.502386    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:59 old-k8s-version-718400 kubelet[1698]: E0229 19:04:59.943011    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:09.509353    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:01 old-k8s-version-718400 kubelet[1698]: E0229 19:05:01.942963    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:09.512343    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:02 old-k8s-version-718400 kubelet[1698]: E0229 19:05:02.956686    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:09.524894    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:08 old-k8s-version-718400 kubelet[1698]: E0229 19:05:08.944758    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:05:09.525886    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:05:09.525886    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:05:09.550245    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:05:09.550245    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:05:09.672866    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:05:09.672866    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:05:09.672866    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:05:09.706796    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:05:09.706796    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:05:09.798689    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:09.798689    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:05:09.798689    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:05:09.798689    3012 out.go:239]   Feb 29 19:04:57 old-k8s-version-718400 kubelet[1698]: E0229 19:04:57.946574    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:09.798689    3012 out.go:239]   Feb 29 19:04:59 old-k8s-version-718400 kubelet[1698]: E0229 19:04:59.943011    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:09.798689    3012 out.go:239]   Feb 29 19:05:01 old-k8s-version-718400 kubelet[1698]: E0229 19:05:01.942963    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:09.798689    3012 out.go:239]   Feb 29 19:05:02 old-k8s-version-718400 kubelet[1698]: E0229 19:05:02.956686    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:09.798689    3012 out.go:239]   Feb 29 19:05:08 old-k8s-version-718400 kubelet[1698]: E0229 19:05:08.944758    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:05:09.798689    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:09.798689    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:19.825059    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:05:19.859023    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:05:19.900684    3012 logs.go:276] 0 containers: []
	W0229 19:05:19.900684    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:05:19.917858    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:05:19.964921    3012 logs.go:276] 0 containers: []
	W0229 19:05:19.964921    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:05:19.972914    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:05:20.008913    3012 logs.go:276] 0 containers: []
	W0229 19:05:20.008913    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:05:20.017919    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:05:20.059906    3012 logs.go:276] 0 containers: []
	W0229 19:05:20.059906    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:05:20.073092    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:05:20.122077    3012 logs.go:276] 0 containers: []
	W0229 19:05:20.122077    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:05:20.134083    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:05:20.176076    3012 logs.go:276] 0 containers: []
	W0229 19:05:20.176076    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:05:20.188057    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:05:20.228037    3012 logs.go:276] 0 containers: []
	W0229 19:05:20.228037    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:05:20.239039    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:05:20.290429    3012 logs.go:276] 0 containers: []
	W0229 19:05:20.290429    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:05:20.290429    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:05:20.290429    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:05:20.330049    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:57 old-k8s-version-718400 kubelet[1698]: E0229 19:04:57.946574    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:20.336049    3012 logs.go:138] Found kubelet problem: Feb 29 19:04:59 old-k8s-version-718400 kubelet[1698]: E0229 19:04:59.943011    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:20.343047    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:01 old-k8s-version-718400 kubelet[1698]: E0229 19:05:01.942963    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:20.347050    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:02 old-k8s-version-718400 kubelet[1698]: E0229 19:05:02.956686    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:20.361035    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:08 old-k8s-version-718400 kubelet[1698]: E0229 19:05:08.944758    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:20.368041    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:11 old-k8s-version-718400 kubelet[1698]: E0229 19:05:11.943199    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:20.373045    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:14 old-k8s-version-718400 kubelet[1698]: E0229 19:05:14.008288    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:20.380042    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:16 old-k8s-version-718400 kubelet[1698]: E0229 19:05:16.953736    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:05:20.387041    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:05:20.387041    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:05:20.417100    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:05:20.417142    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:05:20.550173    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:05:20.550173    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:05:20.550421    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:05:20.596978    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:05:20.597050    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:05:20.693805    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:20.693805    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:05:20.693805    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:05:20.693805    3012 out.go:239]   Feb 29 19:05:02 old-k8s-version-718400 kubelet[1698]: E0229 19:05:02.956686    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:20.693805    3012 out.go:239]   Feb 29 19:05:08 old-k8s-version-718400 kubelet[1698]: E0229 19:05:08.944758    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:20.693805    3012 out.go:239]   Feb 29 19:05:11 old-k8s-version-718400 kubelet[1698]: E0229 19:05:11.943199    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:20.693805    3012 out.go:239]   Feb 29 19:05:14 old-k8s-version-718400 kubelet[1698]: E0229 19:05:14.008288    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:20.693805    3012 out.go:239]   Feb 29 19:05:16 old-k8s-version-718400 kubelet[1698]: E0229 19:05:16.953736    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:05:20.693805    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:20.693805    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:30.728140    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:05:30.758756    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:05:30.797615    3012 logs.go:276] 0 containers: []
	W0229 19:05:30.797615    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:05:30.806605    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:05:30.842609    3012 logs.go:276] 0 containers: []
	W0229 19:05:30.842609    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:05:30.851604    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:05:30.888287    3012 logs.go:276] 0 containers: []
	W0229 19:05:30.888368    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:05:30.897370    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:05:30.938208    3012 logs.go:276] 0 containers: []
	W0229 19:05:30.938208    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:05:30.946217    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:05:30.987205    3012 logs.go:276] 0 containers: []
	W0229 19:05:30.987205    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:05:30.997351    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:05:31.035583    3012 logs.go:276] 0 containers: []
	W0229 19:05:31.035583    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:05:31.044251    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:05:31.084310    3012 logs.go:276] 0 containers: []
	W0229 19:05:31.084310    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:05:31.093308    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:05:31.145736    3012 logs.go:276] 0 containers: []
	W0229 19:05:31.145736    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:05:31.145736    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:05:31.145736    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:05:31.184033    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:08 old-k8s-version-718400 kubelet[1698]: E0229 19:05:08.944758    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:31.192037    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:11 old-k8s-version-718400 kubelet[1698]: E0229 19:05:11.943199    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:31.197072    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:14 old-k8s-version-718400 kubelet[1698]: E0229 19:05:14.008288    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:31.204044    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:16 old-k8s-version-718400 kubelet[1698]: E0229 19:05:16.953736    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:31.218045    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:22 old-k8s-version-718400 kubelet[1698]: E0229 19:05:22.942546    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:31.221040    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:23 old-k8s-version-718400 kubelet[1698]: E0229 19:05:23.951856    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:31.225079    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:25 old-k8s-version-718400 kubelet[1698]: E0229 19:05:25.948134    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:31.235041    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:29 old-k8s-version-718400 kubelet[1698]: E0229 19:05:29.947213    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:05:31.238077    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:05:31.238077    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:05:31.264962    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:05:31.265057    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:05:31.370201    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:05:31.370201    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:05:31.370201    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:05:31.399869    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:05:31.399869    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:05:31.485873    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:31.485873    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:05:31.485873    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:05:31.485873    3012 out.go:239]   Feb 29 19:05:16 old-k8s-version-718400 kubelet[1698]: E0229 19:05:16.953736    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:31.485873    3012 out.go:239]   Feb 29 19:05:22 old-k8s-version-718400 kubelet[1698]: E0229 19:05:22.942546    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:31.485873    3012 out.go:239]   Feb 29 19:05:23 old-k8s-version-718400 kubelet[1698]: E0229 19:05:23.951856    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:31.485873    3012 out.go:239]   Feb 29 19:05:25 old-k8s-version-718400 kubelet[1698]: E0229 19:05:25.948134    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:31.485873    3012 out.go:239]   Feb 29 19:05:29 old-k8s-version-718400 kubelet[1698]: E0229 19:05:29.947213    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:05:31.485873    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:31.485873    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:41.508231    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:05:41.543212    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:05:41.581260    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.581260    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:05:41.592911    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:05:41.643417    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.643417    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:05:41.651414    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:05:41.687233    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.687233    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:05:41.698222    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:05:41.740508    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.740587    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:05:41.752356    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:05:41.784900    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.784900    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:05:41.798242    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:05:41.848913    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.849911    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:05:41.857910    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:05:41.898024    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.898024    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:05:41.909256    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:05:41.956853    3012 logs.go:276] 0 containers: []
	W0229 19:05:41.956853    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:05:41.956853    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:05:41.956853    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:05:42.018965    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:22 old-k8s-version-718400 kubelet[1698]: E0229 19:05:22.942546    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:42.022964    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:23 old-k8s-version-718400 kubelet[1698]: E0229 19:05:23.951856    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:42.029984    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:25 old-k8s-version-718400 kubelet[1698]: E0229 19:05:25.948134    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:42.039966    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:29 old-k8s-version-718400 kubelet[1698]: E0229 19:05:29.947213    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:42.053578    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:34 old-k8s-version-718400 kubelet[1698]: E0229 19:05:34.950123    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:42.059598    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:36 old-k8s-version-718400 kubelet[1698]: E0229 19:05:36.968953    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:42.063276    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:37 old-k8s-version-718400 kubelet[1698]: E0229 19:05:37.958868    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:05:42.071185    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:05:42.071185    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:05:42.098550    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:05:42.098653    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:05:42.216400    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:05:42.216400    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:05:42.216400    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:05:42.256355    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:05:42.256355    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:05:42.350533    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:42.350533    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:05:42.350533    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:05:42.350533    3012 out.go:239]   Feb 29 19:05:25 old-k8s-version-718400 kubelet[1698]: E0229 19:05:25.948134    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:42.350533    3012 out.go:239]   Feb 29 19:05:29 old-k8s-version-718400 kubelet[1698]: E0229 19:05:29.947213    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:42.350533    3012 out.go:239]   Feb 29 19:05:34 old-k8s-version-718400 kubelet[1698]: E0229 19:05:34.950123    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:42.350533    3012 out.go:239]   Feb 29 19:05:36 old-k8s-version-718400 kubelet[1698]: E0229 19:05:36.968953    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:42.350533    3012 out.go:239]   Feb 29 19:05:37 old-k8s-version-718400 kubelet[1698]: E0229 19:05:37.958868    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:05:42.350533    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:42.350533    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:52.382812    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:05:52.420456    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:05:52.462741    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.463730    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:05:52.473118    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:05:52.516997    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.516997    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:05:52.529220    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:05:52.564478    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.564478    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:05:52.574904    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:05:52.615268    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.615268    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:05:52.624757    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:05:52.662485    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.662485    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:05:52.671873    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:05:52.711553    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.711553    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:05:52.723361    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:05:52.759627    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.759627    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:05:52.768962    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:05:52.806593    3012 logs.go:276] 0 containers: []
	W0229 19:05:52.806734    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:05:52.806734    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:05:52.806734    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:05:52.835325    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:05:52.835325    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:05:52.934326    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:05:52.934326    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:05:52.934326    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:05:52.966523    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:05:52.966523    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:05:53.053559    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:05:53.053559    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:05:53.096464    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:29 old-k8s-version-718400 kubelet[1698]: E0229 19:05:29.947213    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:53.107827    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:34 old-k8s-version-718400 kubelet[1698]: E0229 19:05:34.950123    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:53.114081    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:36 old-k8s-version-718400 kubelet[1698]: E0229 19:05:36.968953    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:53.116355    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:37 old-k8s-version-718400 kubelet[1698]: E0229 19:05:37.958868    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:53.132357    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:44 old-k8s-version-718400 kubelet[1698]: E0229 19:05:44.948444    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:53.137964    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:46 old-k8s-version-718400 kubelet[1698]: E0229 19:05:46.946069    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:53.147105    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.966566    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:53.148133    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.968061    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:05:53.152884    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:53.152884    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:05:53.153945    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:05:53.153945    3012 out.go:239]   Feb 29 19:05:37 old-k8s-version-718400 kubelet[1698]: E0229 19:05:37.958868    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:05:53.153945    3012 out.go:239]   Feb 29 19:05:44 old-k8s-version-718400 kubelet[1698]: E0229 19:05:44.948444    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:05:53.153945    3012 out.go:239]   Feb 29 19:05:46 old-k8s-version-718400 kubelet[1698]: E0229 19:05:46.946069    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:05:53.153945    3012 out.go:239]   Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.966566    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:05:53.154118    3012 out.go:239]   Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.968061    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:05:53.154118    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:05:53.154222    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:06:03.200534    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:06:03.239514    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:06:03.275518    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.275518    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:03.284496    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:06:03.334501    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.334501    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:06:03.343502    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:06:03.393975    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.393975    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:06:03.413920    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:06:03.460928    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.460928    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:03.471921    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:06:03.516948    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.516948    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:03.530926    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:06:03.577922    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.577922    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:03.586925    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:06:03.644508    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.644587    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:03.653931    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:06:03.705569    3012 logs.go:276] 0 containers: []
	W0229 19:06:03.705569    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:03.705569    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:06:03.705569    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:06:03.748031    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:06:03.748031    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:03.839508    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:03.839508    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:03.887504    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:44 old-k8s-version-718400 kubelet[1698]: E0229 19:05:44.948444    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:03.895197    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:46 old-k8s-version-718400 kubelet[1698]: E0229 19:05:46.946069    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:03.912888    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.966566    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:03.913890    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.968061    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:03.931886    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:56 old-k8s-version-718400 kubelet[1698]: E0229 19:05:56.963947    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:03.939889    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:59 old-k8s-version-718400 kubelet[1698]: E0229 19:05:59.958605    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:03.947892    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:03.948885    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:03.983267    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:03.983377    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:04.098342    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:04.098342    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:04.098342    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:06:04.099009    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:06:04.099009    3012 out.go:239]   Feb 29 19:05:46 old-k8s-version-718400 kubelet[1698]: E0229 19:05:46.946069    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:04.099009    3012 out.go:239]   Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.966566    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:04.099009    3012 out.go:239]   Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.968061    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:04.099009    3012 out.go:239]   Feb 29 19:05:56 old-k8s-version-718400 kubelet[1698]: E0229 19:05:56.963947    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:04.099009    3012 out.go:239]   Feb 29 19:05:59 old-k8s-version-718400 kubelet[1698]: E0229 19:05:59.958605    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:04.099009    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:04.099009    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:06:14.121082    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:06:14.159020    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:06:14.202530    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.202530    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:14.220001    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:06:14.265834    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.266335    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:06:14.282519    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:06:14.329011    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.329011    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:06:14.339979    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:06:14.389838    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.389838    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:14.401229    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:06:14.445841    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.445921    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:14.457373    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:06:14.504504    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.504504    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:14.514508    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:06:14.558508    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.558508    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:14.566506    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:06:14.610510    3012 logs.go:276] 0 containers: []
	W0229 19:06:14.610510    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
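The eight probes above are minikube's log collector discovering which control-plane containers exist before it gathers logs: for each component it lists every container, running or exited, whose Docker name matches the k8s_<component> prefix, and here every probe returns zero IDs. A minimal local sketch of the same check in Go; the listContainers helper and the component list are illustrative, not minikube's actual logs.go code, and it assumes docker is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the probes above: list all containers
    // (running or exited) whose name matches k8s_<component>, printing
    // only their IDs.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "probe failed:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

On this node every component comes back with 0 containers, which is why the collector falls back to kubelet, dmesg, and Docker daemon logs below.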
	I0229 19:06:14.610510    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:14.610510    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:14.648598    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:50 old-k8s-version-718400 kubelet[1698]: E0229 19:05:50.968061    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:14.661607    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:56 old-k8s-version-718400 kubelet[1698]: E0229 19:05:56.963947    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:14.668597    3012 logs.go:138] Found kubelet problem: Feb 29 19:05:59 old-k8s-version-718400 kubelet[1698]: E0229 19:05:59.958605    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:14.678603    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:04 old-k8s-version-718400 kubelet[1698]: E0229 19:06:04.957143    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:14.681601    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:05 old-k8s-version-718400 kubelet[1698]: E0229 19:06:05.945536    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:14.691624    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:09 old-k8s-version-718400 kubelet[1698]: E0229 19:06:09.951347    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:14.698609    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:11 old-k8s-version-718400 kubelet[1698]: E0229 19:06:11.951141    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
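Every kubelet problem found above is the same ImageInspectError: the v1.16 kubelet's dockershim inspects the preloaded control-plane images, Docker returns a record whose Id or size is unset, and the kubelet refuses to start the container, so the static pods never sync. A hedged reproduction of that sanity check using the Docker Go SDK; validateImage is an illustrative name, while ImageInspectWithRaw and the ID/Size fields are the SDK's own:

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    // validateImage mimics the dockershim-style check that fails above:
    // inspect a cached image and reject it when Docker reports no ID or
    // a zero size ("Id or size of image ... is not set").
    func validateImage(ctx context.Context, cli *client.Client, ref string) error {
        inspect, _, err := cli.ImageInspectWithRaw(ctx, ref)
        if err != nil {
            return fmt.Errorf("failed to inspect image %q: %w", ref, err)
        }
        if inspect.ID == "" || inspect.Size == 0 {
            return fmt.Errorf("Id or size of image %q is not set", ref)
        }
        return nil
    }

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()
        if err := validateImage(context.Background(), cli, "k8s.gcr.io/etcd:3.3.15-0"); err != nil {
            fmt.Println(err)
        }
    }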
	I0229 19:06:14.706609    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:14.706609    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:14.732613    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:14.732613    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:14.842029    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
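The "describe nodes" step fails for the same underlying reason: with no kube-apiserver container running, nothing listens on the apiserver port, so kubectl's connection to localhost:8443 is refused outright rather than timing out. A plain TCP dial shows the same symptom; this sketch assumes it runs where port 8443 would be bound, i.e. inside the node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused connection means no process is bound to 8443 at all,
        // matching the kubectl error above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }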
	I0229 19:06:14.842029    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:06:14.842029    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:06:14.874764    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:06:14.874869    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
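The container-status command uses a shell fallback: run crictl when it is installed (`which crictl`), otherwise fall back to plain `docker ps -a`. The same preference order in Go, as a sketch with exec.LookPath standing in for `which`:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present, mirroring
        // `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
        cmd := "docker"
        if path, err := exec.LookPath("crictl"); err == nil {
            cmd = path
        }
        out, err := exec.Command(cmd, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }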
	I0229 19:06:14.961674    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:14.961755    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:06:14.961907    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:06:14.961958    3012 out.go:239]   Feb 29 19:05:59 old-k8s-version-718400 kubelet[1698]: E0229 19:05:59.958605    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:14.961958    3012 out.go:239]   Feb 29 19:06:04 old-k8s-version-718400 kubelet[1698]: E0229 19:06:04.957143    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:14.961958    3012 out.go:239]   Feb 29 19:06:05 old-k8s-version-718400 kubelet[1698]: E0229 19:06:05.945536    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:14.962060    3012 out.go:239]   Feb 29 19:06:09 old-k8s-version-718400 kubelet[1698]: E0229 19:06:09.951347    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:14.962100    3012 out.go:239]   Feb 29 19:06:11 old-k8s-version-718400 kubelet[1698]: E0229 19:06:11.951141    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:14.962100    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:14.962164    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
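From here the whole block repeats on a roughly ten-second cadence (19:06:14, 19:06:25, 19:06:36, 19:06:47, 19:06:58): probe for a kube-apiserver process, list containers, gather kubelet/dmesg/describe-nodes/Docker logs, print the kubelet problems, and try again until the wait deadline expires. A skeletal poll loop of that shape, purely illustrative; the two-minute deadline and ten-second interval are placeholders, not minikube's actual timeouts:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe that opens each cycle above; pgrep exits non-zero when no
    // process matches.
    func apiserverRunning() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("apiserver is up")
                return
            }
            // ... gather kubelet/dmesg/docker logs here, as each cycle above does ...
            time.Sleep(10 * time.Second)
        }
        fmt.Println("timed out waiting for apiserver")
    }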
	I0229 19:06:25.014070    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:06:25.057733    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:06:25.095328    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.095328    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:25.105870    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:06:25.167948    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.167948    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:06:25.182039    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:06:25.223400    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.223400    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:06:25.231400    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:06:25.270955    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.270955    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:25.282958    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:06:25.326365    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.326365    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:25.342119    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:06:25.378103    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.378103    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:25.388090    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:06:25.433083    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.433083    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:25.445090    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:06:25.483099    3012 logs.go:276] 0 containers: []
	W0229 19:06:25.483099    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:25.483099    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:25.483099    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:25.534096    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:04 old-k8s-version-718400 kubelet[1698]: E0229 19:06:04.957143    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:25.539091    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:05 old-k8s-version-718400 kubelet[1698]: E0229 19:06:05.945536    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:25.552101    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:09 old-k8s-version-718400 kubelet[1698]: E0229 19:06:09.951347    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:25.561085    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:11 old-k8s-version-718400 kubelet[1698]: E0229 19:06:11.951141    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:25.572086    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:16 old-k8s-version-718400 kubelet[1698]: E0229 19:06:16.946603    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:25.578090    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:18 old-k8s-version-718400 kubelet[1698]: E0229 19:06:18.962046    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:25.587085    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:22 old-k8s-version-718400 kubelet[1698]: E0229 19:06:22.960470    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:25.589087    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:23 old-k8s-version-718400 kubelet[1698]: E0229 19:06:23.951326    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:06:25.593083    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:25.593083    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:25.623092    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:25.623092    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:25.735094    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 19:06:25.735094    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:06:25.735094    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:06:25.769091    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:06:25.769091    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:25.854649    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:25.854649    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:06:25.854649    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:06:25.854649    3012 out.go:239]   Feb 29 19:06:11 old-k8s-version-718400 kubelet[1698]: E0229 19:06:11.951141    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:25.854649    3012 out.go:239]   Feb 29 19:06:16 old-k8s-version-718400 kubelet[1698]: E0229 19:06:16.946603    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:25.854649    3012 out.go:239]   Feb 29 19:06:18 old-k8s-version-718400 kubelet[1698]: E0229 19:06:18.962046    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:25.854649    3012 out.go:239]   Feb 29 19:06:22 old-k8s-version-718400 kubelet[1698]: E0229 19:06:22.960470    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:25.854649    3012 out.go:239]   Feb 29 19:06:23 old-k8s-version-718400 kubelet[1698]: E0229 19:06:23.951326    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:06:25.854649    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:25.854649    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:06:35.898839    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:06:35.953818    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:06:36.028826    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.028826    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:36.046824    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:06:36.117746    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.117746    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:06:36.133756    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:06:36.189738    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.189738    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:06:36.207831    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:06:36.261730    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.261730    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:36.278741    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:06:36.355739    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.355739    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:36.367743    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:06:36.437055    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.438050    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:36.456048    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:06:36.520044    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.520044    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:36.538053    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:06:36.595060    3012 logs.go:276] 0 containers: []
	W0229 19:06:36.595060    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:36.595060    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:06:36.595060    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:06:36.650076    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:06:36.650076    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:36.771065    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:36.772057    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:36.845433    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:16 old-k8s-version-718400 kubelet[1698]: E0229 19:06:16.946603    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:36.854435    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:18 old-k8s-version-718400 kubelet[1698]: E0229 19:06:18.962046    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:36.871428    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:22 old-k8s-version-718400 kubelet[1698]: E0229 19:06:22.960470    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:36.874430    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:23 old-k8s-version-718400 kubelet[1698]: E0229 19:06:23.951326    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:36.897423    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:29 old-k8s-version-718400 kubelet[1698]: E0229 19:06:29.948784    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:36.904429    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:30 old-k8s-version-718400 kubelet[1698]: E0229 19:06:30.959459    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:36.922425    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:34 old-k8s-version-718400 kubelet[1698]: E0229 19:06:34.980791    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:36.929433    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:36.929433    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:36.967452    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:36.968434    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:37.179512    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 19:06:37.179512    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:37.180508    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:06:37.180508    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:06:37.180508    3012 out.go:239]   Feb 29 19:06:22 old-k8s-version-718400 kubelet[1698]: E0229 19:06:22.960470    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:37.180508    3012 out.go:239]   Feb 29 19:06:23 old-k8s-version-718400 kubelet[1698]: E0229 19:06:23.951326    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:37.180508    3012 out.go:239]   Feb 29 19:06:29 old-k8s-version-718400 kubelet[1698]: E0229 19:06:29.948784    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:37.180508    3012 out.go:239]   Feb 29 19:06:30 old-k8s-version-718400 kubelet[1698]: E0229 19:06:30.959459    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:37.180508    3012 out.go:239]   Feb 29 19:06:34 old-k8s-version-718400 kubelet[1698]: E0229 19:06:34.980791    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:37.180508    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:37.180508    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:06:47.219790    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:06:47.257817    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:06:47.312340    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.312340    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:47.326340    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:06:47.365609    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.365609    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:06:47.374611    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:06:47.425287    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.425345    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:06:47.437360    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:06:47.475404    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.475404    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:47.484366    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:06:47.536123    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.536123    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:47.545182    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:06:47.581713    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.581784    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:47.597480    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:06:47.648147    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.648147    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:47.657137    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:06:47.697629    3012 logs.go:276] 0 containers: []
	W0229 19:06:47.697629    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:47.697629    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:06:47.697629    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:47.789546    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:47.789665    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:47.861130    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:29 old-k8s-version-718400 kubelet[1698]: E0229 19:06:29.948784    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:47.864821    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:30 old-k8s-version-718400 kubelet[1698]: E0229 19:06:30.959459    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:47.878145    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:34 old-k8s-version-718400 kubelet[1698]: E0229 19:06:34.980791    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:47.884529    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:36 old-k8s-version-718400 kubelet[1698]: E0229 19:06:36.977110    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:47.896969    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:41 old-k8s-version-718400 kubelet[1698]: E0229 19:06:41.951601    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:47.899986    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:42 old-k8s-version-718400 kubelet[1698]: E0229 19:06:42.948706    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:47.918410    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:46 old-k8s-version-718400 kubelet[1698]: E0229 19:06:46.956422    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:47.922014    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:47.922014    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:47.956441    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:47.956527    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:48.086230    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 19:06:48.086230    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:06:48.086230    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:06:48.117230    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:48.117230    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:06:48.118233    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:06:48.118233    3012 out.go:239]   Feb 29 19:06:34 old-k8s-version-718400 kubelet[1698]: E0229 19:06:34.980791    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:48.118233    3012 out.go:239]   Feb 29 19:06:36 old-k8s-version-718400 kubelet[1698]: E0229 19:06:36.977110    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:48.118233    3012 out.go:239]   Feb 29 19:06:41 old-k8s-version-718400 kubelet[1698]: E0229 19:06:41.951601    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:48.118233    3012 out.go:239]   Feb 29 19:06:42 old-k8s-version-718400 kubelet[1698]: E0229 19:06:42.948706    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:48.118233    3012 out.go:239]   Feb 29 19:06:46 old-k8s-version-718400 kubelet[1698]: E0229 19:06:46.956422    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:06:48.118233    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:48.118233    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:06:58.145429    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:06:58.184867    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:06:58.236885    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.236885    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:06:58.250588    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:06:58.290867    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.290867    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:06:58.307856    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:06:58.356855    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.356855    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:06:58.362846    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:06:58.422059    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.422059    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:06:58.432072    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:06:58.476087    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.476087    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:06:58.485073    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:06:58.544787    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.544862    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:06:58.562278    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:06:58.603287    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.603287    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:06:58.618275    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:06:58.658218    3012 logs.go:276] 0 containers: []
	W0229 19:06:58.658268    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:06:58.658377    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:06:58.658377    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:06:58.712727    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:36 old-k8s-version-718400 kubelet[1698]: E0229 19:06:36.977110    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:58.729710    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:41 old-k8s-version-718400 kubelet[1698]: E0229 19:06:41.951601    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:58.732718    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:42 old-k8s-version-718400 kubelet[1698]: E0229 19:06:42.948706    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:58.741717    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:46 old-k8s-version-718400 kubelet[1698]: E0229 19:06:46.956422    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:58.746721    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:48 old-k8s-version-718400 kubelet[1698]: E0229 19:06:48.955189    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:58.764704    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.965669    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:58.764704    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.967040    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:06:58.767717    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:06:58.768712    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:06:58.795794    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:06:58.795794    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:06:58.914624    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:06:58.914624    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:06:58.915626    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:06:58.953091    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:06:58.953091    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:06:59.055103    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:59.055103    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:06:59.056102    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:06:59.056102    3012 out.go:239]   Feb 29 19:06:42 old-k8s-version-718400 kubelet[1698]: E0229 19:06:42.948706    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:06:59.056102    3012 out.go:239]   Feb 29 19:06:46 old-k8s-version-718400 kubelet[1698]: E0229 19:06:46.956422    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:06:59.056102    3012 out.go:239]   Feb 29 19:06:48 old-k8s-version-718400 kubelet[1698]: E0229 19:06:48.955189    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:06:59.056102    3012 out.go:239]   Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.965669    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:06:59.056102    3012 out.go:239]   Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.967040    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:06:59.056102    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:06:59.056102    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
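	The cycle above is one pass of minikube's wait loop for the apiserver: a pgrep for a kube-apiserver process, then one docker ps probe per control-plane component, each returning zero containers. A minimal sketch of that probe under the naming convention the log shows (the k8s_ name prefix and component list are taken from the log; the loop itself is illustrative, not minikube's actual source):
	
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter=name=k8s_${c} --format={{.ID}})
	  # An empty result here is what the log records as "0 containers: []"
	  # followed by "No container was found matching ...".
	  [ -z "$ids" ] && echo "no container matching ${c}"
	done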
	I0229 19:07:09.088559    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:09.127996    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:07:09.179508    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.179508    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:07:09.192506    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:07:09.237505    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.237505    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:07:09.246505    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:07:09.291239    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.291239    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:07:09.302237    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:07:09.338242    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.338242    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:07:09.347248    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:07:09.397502    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.397614    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:07:09.414294    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:07:09.467176    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.467176    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:07:09.481212    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:07:09.528159    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.528159    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:09.543171    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:07:09.585645    3012 logs.go:276] 0 containers: []
	W0229 19:07:09.585645    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:07:09.585645    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:09.585645    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:07:09.726242    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:07:09.726242    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:07:09.726242    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:07:09.768238    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:07:09.768238    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:09.861256    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:09.861256    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:07:09.916883    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:46 old-k8s-version-718400 kubelet[1698]: E0229 19:06:46.956422    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:09.927853    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:48 old-k8s-version-718400 kubelet[1698]: E0229 19:06:48.955189    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:09.957868    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.965669    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:09.958860    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.967040    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:09.970851    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.993298    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:09.971857    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.995993    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:07:09.989874    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:09.989874    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:10.026693    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:10.026693    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:07:10.026693    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:07:10.026693    3012 out.go:239]   Feb 29 19:06:48 old-k8s-version-718400 kubelet[1698]: E0229 19:06:48.955189    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:10.026693    3012 out.go:239]   Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.965669    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:10.026693    3012 out.go:239]   Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.967040    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:10.026693    3012 out.go:239]   Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.993298    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:10.026693    3012 out.go:239]   Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.995993    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:07:10.026693    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:10.026693    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
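	The "container status" step relies on a fallback chain: use crictl when it is installed, otherwise fall back to plain docker ps. Expanded form of the one-liner captured in the log, with the moving parts annotated (same command, comments added):
	
	# `which crictl || echo crictl` expands to the crictl path when present,
	# otherwise to the bare word crictl, whose failure triggers the docker fallback:
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a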
	I0229 19:07:20.052852    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:20.086665    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:07:20.132116    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.132116    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:07:20.142256    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:07:20.185746    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.185928    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:07:20.195117    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:07:20.233797    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.233863    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:07:20.242811    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:07:20.281374    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.281374    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:07:20.290654    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:07:20.333946    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.333946    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:07:20.343119    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:07:20.386549    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.386549    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:07:20.395902    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:07:20.435834    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.435951    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:20.445487    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:07:20.488135    3012 logs.go:276] 0 containers: []
	W0229 19:07:20.488203    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:07:20.488203    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:20.488203    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:20.516944    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:20.518007    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:07:20.652005    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:07:20.652005    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:07:20.652005    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:07:20.693014    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:07:20.693014    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:20.779021    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:20.779021    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:07:20.830029    3012 logs.go:138] Found kubelet problem: Feb 29 19:06:56 old-k8s-version-718400 kubelet[1698]: E0229 19:06:56.967040    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:20.841002    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.993298    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:20.841989    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.995993    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:20.865002    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:09 old-k8s-version-718400 kubelet[1698]: E0229 19:07:09.956964    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:20.866000    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:09 old-k8s-version-718400 kubelet[1698]: E0229 19:07:09.958713    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:20.871999    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:12 old-k8s-version-718400 kubelet[1698]: E0229 19:07:12.956646    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:20.874996    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:13 old-k8s-version-718400 kubelet[1698]: E0229 19:07:13.960049    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:07:20.897468    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:20.897468    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:07:20.897468    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:07:20.897468    3012 out.go:239]   Feb 29 19:07:00 old-k8s-version-718400 kubelet[1698]: E0229 19:07:00.995993    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:20.897468    3012 out.go:239]   Feb 29 19:07:09 old-k8s-version-718400 kubelet[1698]: E0229 19:07:09.956964    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:20.897468    3012 out.go:239]   Feb 29 19:07:09 old-k8s-version-718400 kubelet[1698]: E0229 19:07:09.958713    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:20.897468    3012 out.go:239]   Feb 29 19:07:12 old-k8s-version-718400 kubelet[1698]: E0229 19:07:12.956646    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:20.897468    3012 out.go:239]   Feb 29 19:07:13 old-k8s-version-718400 kubelet[1698]: E0229 19:07:13.960049    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:07:20.897468    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:20.897468    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
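	Every "failed describe nodes" block above fails the same way: with no apiserver container running, nothing listens on localhost:8443, so the bundled kubectl is refused. The failing command can be replayed on the node verbatim (binary path and kubeconfig exactly as logged):
	
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# While the apiserver is down this prints:
	# The connection to the server localhost:8443 was refused - did you specify the right host or port?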
	I0229 19:07:30.944889    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:30.991891    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:07:31.047883    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.047883    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:07:31.059879    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:07:31.118905    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.118905    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:07:31.131883    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:07:31.189892    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.189892    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:07:31.204881    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:07:31.261896    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.261896    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:07:31.276915    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:07:31.333907    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.333907    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:07:31.349895    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:07:31.401899    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.401899    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:07:31.416906    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:07:31.464404    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.464404    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:31.478410    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:07:31.536401    3012 logs.go:276] 0 containers: []
	W0229 19:07:31.536401    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:07:31.536401    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:31.536401    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:07:31.599405    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:09 old-k8s-version-718400 kubelet[1698]: E0229 19:07:09.956964    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:31.600418    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:09 old-k8s-version-718400 kubelet[1698]: E0229 19:07:09.958713    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:31.612404    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:12 old-k8s-version-718400 kubelet[1698]: E0229 19:07:12.956646    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:31.617401    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:13 old-k8s-version-718400 kubelet[1698]: E0229 19:07:13.960049    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:31.646402    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:21 old-k8s-version-718400 kubelet[1698]: E0229 19:07:21.949353    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:31.656436    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:23 old-k8s-version-718400 kubelet[1698]: E0229 19:07:23.957018    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:31.657404    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:23 old-k8s-version-718400 kubelet[1698]: E0229 19:07:23.958396    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:31.665395    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:25 old-k8s-version-718400 kubelet[1698]: E0229 19:07:25.947791    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:07:31.689408    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:31.689408    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:31.727397    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:31.728417    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:07:31.923392    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:07:31.924424    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:07:31.924424    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:07:31.971383    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:07:31.971383    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:32.106373    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:32.106373    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:07:32.106373    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:07:32.106373    3012 out.go:239]   Feb 29 19:07:13 old-k8s-version-718400 kubelet[1698]: E0229 19:07:13.960049    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:32.106373    3012 out.go:239]   Feb 29 19:07:21 old-k8s-version-718400 kubelet[1698]: E0229 19:07:21.949353    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:32.106373    3012 out.go:239]   Feb 29 19:07:23 old-k8s-version-718400 kubelet[1698]: E0229 19:07:23.957018    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:32.106373    3012 out.go:239]   Feb 29 19:07:23 old-k8s-version-718400 kubelet[1698]: E0229 19:07:23.958396    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:32.106373    3012 out.go:239]   Feb 29 19:07:25 old-k8s-version-718400 kubelet[1698]: E0229 19:07:25.947791    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:07:32.106373    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:32.106373    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
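	The root cause the kubelet keeps reporting is an ImageInspectError for every v1.16.0 control-plane image: the inspect result lacks an Id or size, so no static pod can start and the probe loop above can only keep finding zero containers. A hypothetical way to see what the runtime reports for one of these images (a troubleshooting step, not part of the test run):
	
	docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.16.0
	# A missing image, or an inspect payload without these fields, is what the
	# v1.16 kubelet surfaces as "Id or size of image ... is not set".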
	I0229 19:07:42.131283    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:42.174520    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:07:42.220519    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.220519    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:07:42.233510    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:07:42.282521    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.282521    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:07:42.293507    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:07:42.340503    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.340503    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:07:42.352510    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:07:42.393514    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.393514    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:07:42.403505    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:07:42.446514    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.446514    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:07:42.457506    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:07:42.501521    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.501521    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:07:42.514511    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:07:42.556510    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.556510    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:07:42.566539    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:07:42.602511    3012 logs.go:276] 0 containers: []
	W0229 19:07:42.602511    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:07:42.602511    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:07:42.602511    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:07:42.648521    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:21 old-k8s-version-718400 kubelet[1698]: E0229 19:07:21.949353    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:42.654518    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:23 old-k8s-version-718400 kubelet[1698]: E0229 19:07:23.957018    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:42.654518    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:23 old-k8s-version-718400 kubelet[1698]: E0229 19:07:23.958396    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:42.660522    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:25 old-k8s-version-718400 kubelet[1698]: E0229 19:07:25.947791    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:42.681517    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:32 old-k8s-version-718400 kubelet[1698]: E0229 19:07:32.972195    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:42.688520    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:35 old-k8s-version-718400 kubelet[1698]: E0229 19:07:35.950715    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:42.692527    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:36 old-k8s-version-718400 kubelet[1698]: E0229 19:07:36.950360    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:42.695524    3012 logs.go:138] Found kubelet problem: Feb 29 19:07:37 old-k8s-version-718400 kubelet[1698]: E0229 19:07:37.960546    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:07:42.707250    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:07:42.707790    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:07:42.734265    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:07:42.734265    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:07:42.844294    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:07:42.844294    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:07:42.844294    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:07:42.877902    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:07:42.878875    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:07:42.957880    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:42.957880    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 19:07:42.957880    3012 out.go:239] X Problems detected in kubelet:
	W0229 19:07:42.957880    3012 out.go:239]   Feb 29 19:07:25 old-k8s-version-718400 kubelet[1698]: E0229 19:07:25.947791    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:07:42.957880    3012 out.go:239]   Feb 29 19:07:32 old-k8s-version-718400 kubelet[1698]: E0229 19:07:32.972195    1698 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:07:42.957880    3012 out.go:239]   Feb 29 19:07:35 old-k8s-version-718400 kubelet[1698]: E0229 19:07:35.950715    1698 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:07:42.957880    3012 out.go:239]   Feb 29 19:07:36 old-k8s-version-718400 kubelet[1698]: E0229 19:07:36.950360    1698 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:07:42.957880    3012 out.go:239]   Feb 29 19:07:37 old-k8s-version-718400 kubelet[1698]: E0229 19:07:37.960546    1698 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	I0229 19:07:42.957880    3012 out.go:304] Setting ErrFile to fd 1560...
	I0229 19:07:42.957880    3012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:07:52.996587    3012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:07:53.031951    3012 kubeadm.go:640] restartCluster took 4m18.8855286s
	W0229 19:07:53.031951    3012 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
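The "apiserver process never appeared" verdict follows from the pgrep probe run at 19:07:52 above. A hedged sketch of such a poll; the timeout here is illustrative (minikube's actual wait during restartCluster is much longer):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverAppeared polls pgrep until a kube-apiserver process matching the
// minikube command line shows up, or the deadline passes.
func apiserverAppeared(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: exact match, -n: newest, -f: match the full command line.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	if !apiserverAppeared(30 * time.Second) {
		fmt.Println("apiserver healthz: apiserver process never appeared")
	}
}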
	I0229 19:07:53.031951    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 19:07:55.232253    3012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.2002844s)
	I0229 19:07:55.252276    3012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:07:55.313730    3012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:07:55.337709    3012 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 19:07:55.352699    3012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:07:55.369700    3012 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
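The config check above (kubeadm.go:152) treats a non-zero exit from ls as "no stale configs to clean up" and proceeds straight to kubeadm init. A minimal reproduction of that decision, using the same four paths:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	files := "/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf " +
		"/etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf"
	if err := exec.Command("bash", "-c", "sudo ls -la "+files).Run(); err != nil {
		// In the log, ls exited with status 2: none of the files exist,
		// so there is nothing stale to clean up.
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return
	}
	fmt.Println("existing configs found; stale config cleanup would run here")
}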
	I0229 19:07:55.369700    3012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 19:07:55.744404    3012 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 19:07:55.744404    3012 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 19:07:55.868406    3012 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0229 19:07:56.062657    3012 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:11:58.244541    3012 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:11:58.244967    3012 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:11:58.250482    3012 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:11:58.250482    3012 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:11:58.251219    3012 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:11:58.251219    3012 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:11:58.251219    3012 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:11:58.252019    3012 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:11:58.252074    3012 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:11:58.252074    3012 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:11:58.252074    3012 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:11:58.283475    3012 out.go:204]   - Generating certificates and keys ...
	I0229 19:11:58.286640    3012 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:11:58.286817    3012 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:11:58.287042    3012 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:11:58.287205    3012 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:11:58.287474    3012 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:11:58.287654    3012 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:11:58.287793    3012 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:11:58.288060    3012 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:11:58.288241    3012 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:11:58.288447    3012 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:11:58.288574    3012 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:11:58.288762    3012 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:11:58.288952    3012 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:11:58.289118    3012 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:11:58.289248    3012 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:11:58.289465    3012 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:11:58.289714    3012 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:11:58.293493    3012 out.go:204]   - Booting up control plane ...
	I0229 19:11:58.293493    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:11:58.294182    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:11:58.294279    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:11:58.294279    3012 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:11:58.295064    3012 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:11:58.295064    3012 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:11:58.295064    3012 kubeadm.go:322] 
	I0229 19:11:58.295064    3012 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:11:58.295064    3012 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:11:58.295064    3012 kubeadm.go:322] 
	I0229 19:11:58.295064    3012 kubeadm.go:322] This error is likely caused by:
	I0229 19:11:58.295064    3012 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:11:58.296190    3012 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:11:58.296190    3012 kubeadm.go:322] 
	I0229 19:11:58.296369    3012 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:11:58.296369    3012 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:11:58.296369    3012 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:11:58.296369    3012 kubeadm.go:322] 
	I0229 19:11:58.296922    3012 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:11:58.297011    3012 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:11:58.297011    3012 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:11:58.297011    3012 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:11:58.297802    3012 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:11:58.297980    3012 kubeadm.go:322] 	- 'docker logs CONTAINERID'
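kubeadm's advice just above ('docker ps -a | grep kube | grep -v pause', then 'docker logs CONTAINERID') can be automated. A small sketch that does exactly that, assuming a local docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List Kubernetes containers, excluding the pause sandboxes.
	out, err := exec.Command("bash", "-c",
		"docker ps -a --format '{{.ID}} {{.Names}}' | grep kube | grep -v pause").Output()
	if err != nil {
		// grep exits non-zero when nothing matches.
		fmt.Println("no matching kube containers (or docker unavailable):", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue
		}
		// Dump each candidate's logs so the failing container can be spotted.
		logs, _ := exec.Command("docker", "logs", fields[0]).CombinedOutput()
		fmt.Printf("--- logs for %s ---\n%s\n", line, logs)
	}
}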
	W0229 19:11:58.298098    3012 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 19:11:58.299401    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 19:12:02.633696    3012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (4.334207s)
	I0229 19:12:02.652760    3012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:12:02.679884    3012 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 19:12:02.693947    3012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:12:02.716725    3012 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:12:02.717279    3012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 19:12:03.051847    3012 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 19:12:03.051847    3012 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 19:12:03.156408    3012 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
	I0229 19:12:03.349836    3012 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:16:05.094764    3012 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:16:05.095215    3012 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:16:05.099756    3012 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:16:05.100306    3012 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:16:05.100596    3012 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:16:05.100925    3012 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:16:05.100925    3012 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:16:05.101471    3012 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:16:05.101798    3012 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:16:05.101940    3012 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:16:05.102008    3012 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:16:05.107021    3012 out.go:204]   - Generating certificates and keys ...
	I0229 19:16:05.107311    3012 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:16:05.107636    3012 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:16:05.107851    3012 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:16:05.108092    3012 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:16:05.108852    3012 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:16:05.109114    3012 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:16:05.109396    3012 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:16:05.109536    3012 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:16:05.109963    3012 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:16:05.110112    3012 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:16:05.110409    3012 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:16:05.110687    3012 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:16:05.110835    3012 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:16:05.111381    3012 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:16:05.113856    3012 out.go:204]   - Booting up control plane ...
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:16:05.115027    3012 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:16:05.116054    3012 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:16:05.116054    3012 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:16:05.116054    3012 kubeadm.go:322] 
	I0229 19:16:05.116054    3012 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:16:05.116054    3012 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:16:05.116054    3012 kubeadm.go:322] 
	I0229 19:16:05.116751    3012 kubeadm.go:322] This error is likely caused by:
	I0229 19:16:05.116871    3012 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:16:05.116979    3012 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:16:05.116979    3012 kubeadm.go:322] 
	I0229 19:16:05.117821    3012 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:16:05.118155    3012 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:16:05.118181    3012 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:16:05.118181    3012 kubeadm.go:322] 
	I0229 19:16:05.118181    3012 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:16:05.118181    3012 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:16:05.118968    3012 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:16:05.119090    3012 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:16:05.119329    3012 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:16:05.119561    3012 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:16:05.119651    3012 kubeadm.go:406] StartCluster complete in 12m31.0799276s
	I0229 19:16:05.129772    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:16:05.171665    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.171665    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:16:05.179666    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:16:05.229862    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.229862    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:16:05.249214    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:16:05.307193    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.307193    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:16:05.316151    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:16:05.362653    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.362653    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:16:05.370651    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:16:05.435306    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.435363    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:16:05.446405    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:16:05.497155    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.497695    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:16:05.510707    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:16:05.553447    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.553447    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:16:05.563373    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:16:05.608587    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.608587    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
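Each 'docker ps -a --filter=name=k8s_...' probe above returned 0 containers, confirming that no control-plane container was ever created. A sketch of the same per-component probe from logs.go:276:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Kubernetes (dockershim-era) names containers k8s_<component>_..., so a
		// name filter per component finds its containers, running or exited.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers matching %q: %v\n", len(ids), c, ids)
	}
}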
	I0229 19:16:05.608587    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:16:05.608587    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:16:05.762137    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:16:05.762137    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:16:05.762137    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:16:05.794822    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:16:05.794822    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:16:05.881247    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:16:05.881247    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:16:05.936905    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:16:05.943928    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:16:05.955775    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:16:05.959784    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:54 old-k8s-version-718400 kubelet[11316]: E0229 19:15:54.343799   11316 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:16:05.965770    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:56 old-k8s-version-718400 kubelet[11316]: E0229 19:15:56.333115   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:16:05.979274    3012 logs.go:138] Found kubelet problem: Feb 29 19:16:02 old-k8s-version-718400 kubelet[11316]: E0229 19:16:02.344608   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:16:05.982275    3012 logs.go:138] Found kubelet problem: Feb 29 19:16:03 old-k8s-version-718400 kubelet[11316]: E0229 19:16:03.341406   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:16:05.988288    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:16:05.988288    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0229 19:16:06.015433    3012 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:16:06.015433    3012 out.go:239] * 
	W0229 19:16:06.015433    3012 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:16:06.015433    3012 out.go:239] * 
	W0229 19:16:06.017885    3012 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:16:06.022944    3012 out.go:177] X Problems detected in kubelet:
	I0229 19:16:06.028715    3012 out.go:177]   Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:16:06.033657    3012 out.go:177]   Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:16:06.038773    3012 out.go:177]   Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:16:06.045519    3012 out.go:177] 
	W0229 19:16:06.047748    3012 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:16:06.047748    3012 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:16:06.047748    3012 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:16:06.051267    3012 out.go:177] 

** /stderr **
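The kubelet entries quoted above fail with ImageInspectError on the v1.16.0 control-plane images (k8s.gcr.io/kube-apiserver, kube-controller-manager, etcd): the runtime reports the image's Id or size as unset. A minimal check of what the runtime inside the node actually holds, as a sketch only; the profile name is taken from this run, and minikube's ssh command passthrough is assumed to work on this host:

	# sketch: list the control-plane images the kubelet cannot inspect (names from the log above)
	out/minikube-windows-amd64.exe ssh -p old-k8s-version-718400 -- docker images k8s.gcr.io/kube-apiserver
	out/minikube-windows-amd64.exe ssh -p old-k8s-version-718400 -- docker images k8s.gcr.io/etcd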
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0": exit status 109
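The exit reason is K8S_KUBELET_NOT_RUNNING, and the Suggestion line above proposes passing --extra-config=kubelet.cgroup-driver=systemd. A retry sketch assembled only from that suggestion and the args string in the failure above; it has not been validated against this run:

	# sketch: re-run the failed start with the cgroup-driver override the log suggests
	out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd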
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-718400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-718400:

-- stdout --
	[
	    {
	        "Id": "12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95",
	        "Created": "2024-02-29T18:51:53.042271456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T19:02:50.267481317Z",
	            "FinishedAt": "2024-02-29T19:02:44.626053594Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hostname",
	        "HostsPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hosts",
	        "LogPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95-json.log",
	        "Name": "/old-k8s-version-718400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-718400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-718400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-718400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-718400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-718400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b08a179fc6ee320f3dde5003178e9a65d1c408322d65542c8d1770780903a6d",
	            "SandboxKey": "/var/run/docker/netns/5b08a179fc6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60232"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60233"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60235"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60236"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-718400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "12e46b2d6b8f",
	                        "old-k8s-version-718400"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "b75bbe82b7c3705bcc35a14b3795bdbd848e1be9ef602ed5c81af9b5c594adc5",
	                    "EndpointID": "38b8832480fb82510fe0ee90cee267741a9d04a575de6d049eb25622afbbf6ed",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-718400",
	                        "12e46b2d6b8f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
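The inspect dump above shows the container itself is healthy (Status "running", Pid 286084) even though the cluster never comes up, so the failure sits inside the node rather than at the Docker layer. When only the state matters, a Go-template filter trims the post-mortem; a sketch using docker inspect's standard --format flag with field names as they appear in the dump:

	# sketch: pull just the container state fields from the inspect output above
	docker inspect old-k8s-version-718400 --format "{{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}"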
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 2 (1.3269434s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0229 19:16:07.408746   12360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
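The stderr above is the same "Unable to resolve the current Docker CLI context" warning that recurs throughout this report: the CLI's current context points at a "default" context whose meta.json is missing on disk. A hedged sketch of how one might inspect and re-select the context with the standard docker context subcommands; whether this clears the warning on this host is not verified:

	# sketch: list known contexts, then point the CLI back at the built-in default
	docker context ls
	docker context use default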
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-718400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-718400 logs -n 25: (1.9609827s)
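The post-mortem below captures only a 25-line tail. Per the advice box earlier in this log, minikube can also write the complete log to a file; a sketch using only flags shown elsewhere in this report:

	# sketch: capture the full log to a file instead of a 25-line tail
	out/minikube-windows-amd64.exe -p old-k8s-version-718400 logs --file=logs.txt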
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p bridge-652900 sudo                  | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	|         | systemctl cat containerd               |                           |                   |         |                     |                     |
	|         | --no-pager                             |                           |                   |         |                     |                     |
	| ssh     | -p bridge-652900 sudo cat              | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	|         | /lib/systemd/system/containerd.service |                           |                   |         |                     |                     |
	| delete  | -p enable-default-cni-652900           | enable-default-cni-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	| ssh     | -p bridge-652900 sudo cat              | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	|         | /etc/containerd/config.toml            |                           |                   |         |                     |                     |
	| ssh     | -p bridge-652900 sudo                  | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	|         | containerd config dump                 |                           |                   |         |                     |                     |
	| ssh     | -p bridge-652900 sudo                  | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC |                     |
	|         | systemctl status crio --all            |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p bridge-652900 sudo                  | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |                   |         |                     |                     |
	| ssh     | -p bridge-652900 sudo find             | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	|         | /etc/crio -type f -exec sh -c          |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p bridge-652900 sudo crio             | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:14 UTC | 29 Feb 24 19:14 UTC |
	|         | config                                 |                           |                   |         |                     |                     |
	| delete  | -p bridge-652900                       | bridge-652900             | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	| ssh     | -p kubenet-652900 pgrep -a             | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | kubelet                                |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat             | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | /etc/nsswitch.conf                     |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat             | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | /etc/hosts                             |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat             | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | /etc/resolv.conf                       |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo crictl          | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | pods                                   |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo crictl          | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | ps --all                               |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo find            | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:15 UTC |
	|         | /etc/cni -type f -exec sh -c           |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo ip a s          | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:15 UTC | 29 Feb 24 19:16 UTC |
	| ssh     | -p kubenet-652900 sudo ip r s          | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	| ssh     | -p kubenet-652900 sudo                 | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | iptables-save                          |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                 | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | iptables -t nat -L -n -v               |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                 | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl status kubelet --all         |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                 | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl cat kubelet                  |                           |                   |         |                     |                     |
	|         | --no-pager                             |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                 | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | journalctl -xeu kubelet --all          |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat             | kubenet-652900            | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf           |                           |                   |         |                     |                     |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:13:13
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:13:13.114953    2612 out.go:291] Setting OutFile to fd 1580 ...
	I0229 19:13:13.114953    2612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:13:13.115937    2612 out.go:304] Setting ErrFile to fd 1056...
	I0229 19:13:13.115937    2612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:13:13.146477    2612 out.go:298] Setting JSON to false
	I0229 19:13:13.150293    2612 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11953,"bootTime":1709222039,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 19:13:13.150293    2612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:13:13.160115    2612 out.go:177] * [kubenet-652900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:13:13.165656    2612 notify.go:220] Checking for updates...
	I0229 19:13:13.170396    2612 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:13:13.173955    2612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:13:13.178382    2612 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 19:13:13.185388    2612 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:13:13.191391    2612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:13:13.195418    2612 config.go:182] Loaded profile config "bridge-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:13.196386    2612 config.go:182] Loaded profile config "enable-default-cni-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:13.196386    2612 config.go:182] Loaded profile config "old-k8s-version-718400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 19:13:13.196386    2612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:13:13.508755    2612 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 19:13:13.519462    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:13:13.913922    2612 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:13:13.872451135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:13:13.917011    2612 out.go:177] * Using the docker driver based on user configuration
	I0229 19:13:13.923712    2612 start.go:299] selected driver: docker
	I0229 19:13:13.923712    2612 start.go:903] validating driver "docker" against <nil>
	I0229 19:13:13.923793    2612 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:13:14.015077    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:13:14.447807    2612 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:13:14.38915421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:13:14.448913    2612 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 19:13:14.453412    2612 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:13:14.458417    2612 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 19:13:14.461396    2612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0229 19:13:14.461396    2612 start_flags.go:323] config:
	{Name:kubenet-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:13:14.466501    2612 out.go:177] * Starting control plane node kubenet-652900 in cluster kubenet-652900
	I0229 19:13:14.471913    2612 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 19:13:14.476676    2612 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 19:13:14.483392    2612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:14.483392    2612 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 19:13:14.483392    2612 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 19:13:14.483392    2612 cache.go:56] Caching tarball of preloaded images
	I0229 19:13:14.484389    2612 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:13:14.484389    2612 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:13:14.484389    2612 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\config.json ...
	I0229 19:13:14.484389    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\config.json: {Name:mk2050ee8b43989e221b5fd49b0f1b7245551d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:14.709795    2612 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 19:13:14.709851    2612 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 19:13:14.709959    2612 cache.go:194] Successfully downloaded all kic artifacts
	I0229 19:13:14.709959    2612 start.go:365] acquiring machines lock for kubenet-652900: {Name:mk79ba672a2aee00cda4bd47db1909e85635aa9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:13:14.710463    2612 start.go:369] acquired machines lock for "kubenet-652900" in 503.6µs
	I0229 19:13:14.710463    2612 start.go:93] Provisioning new machine with config: &{Name:kubenet-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:13:14.710463    2612 start.go:125] createHost starting for "" (driver="docker")
	I0229 19:13:10.111596   14472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:13:10.131123   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=enable-default-cni-652900 minikube.k8s.io/updated_at=2024_02_29T19_13_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:10.132051   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:10.986069   14472 ops.go:34] apiserver oom_adj: -16
	I0229 19:13:11.006886   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:11.506093   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.003892   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.505744   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.006338   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.516251   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.015077   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.522305   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:15.009431   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.115411    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.617769    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.109940    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.608490    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.109831    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.614045    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:15.118067    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:15.623488    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.118778    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.617128    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.720884    2612 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0229 19:13:14.720884    2612 start.go:159] libmachine.API.Create for "kubenet-652900" (driver="docker")
	I0229 19:13:14.720884    2612 client.go:168] LocalClient.Create starting
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Decoding PEM data...
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Parsing certificate...
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Decoding PEM data...
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Parsing certificate...
	I0229 19:13:14.732778    2612 cli_runner.go:164] Run: docker network inspect kubenet-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 19:13:14.908726    2612 cli_runner.go:211] docker network inspect kubenet-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 19:13:14.918355    2612 network_create.go:281] running [docker network inspect kubenet-652900] to gather additional debugging logs...
	I0229 19:13:14.918458    2612 cli_runner.go:164] Run: docker network inspect kubenet-652900
	W0229 19:13:15.095868    2612 cli_runner.go:211] docker network inspect kubenet-652900 returned with exit code 1
	I0229 19:13:15.095962    2612 network_create.go:284] error running [docker network inspect kubenet-652900]: docker network inspect kubenet-652900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-652900 not found
	I0229 19:13:15.096018    2612 network_create.go:286] output of [docker network inspect kubenet-652900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-652900 not found
	
	** /stderr **
	I0229 19:13:15.107891    2612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 19:13:15.320691    2612 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:13:15.351530    2612 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:13:15.383291    2612 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:13:15.404839    2612 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a7200}
	I0229 19:13:15.404839    2612 network_create.go:124] attempt to create docker network kubenet-652900 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0229 19:13:15.412881    2612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-652900 kubenet-652900
	I0229 19:13:15.717904    2612 network_create.go:108] docker network kubenet-652900 192.168.76.0/24 created
	I0229 19:13:15.718057    2612 kic.go:121] calculated static IP "192.168.76.2" for the "kubenet-652900" container
	I0229 19:13:15.739887    2612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 19:13:15.930917    2612 cli_runner.go:164] Run: docker volume create kubenet-652900 --label name.minikube.sigs.k8s.io=kubenet-652900 --label created_by.minikube.sigs.k8s.io=true
	I0229 19:13:16.120778    2612 oci.go:103] Successfully created a docker volume kubenet-652900
	I0229 19:13:16.129771    2612 cli_runner.go:164] Run: docker run --rm --name kubenet-652900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --entrypoint /usr/bin/test -v kubenet-652900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 19:13:15.504908   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.007404   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.506464   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.008405   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.509333   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.014649   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.501539   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.002733   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.507820   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:20.005877   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.129975    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.607209    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.109391    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.612090    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.119804    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.618529    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:20.121893    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:20.623118    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.116510    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.613786    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.820704    7948 kubeadm.go:1088] duration metric: took 12.123241s to wait for elevateKubeSystemPrivileges.
	I0229 19:13:21.820704    7948 kubeadm.go:406] StartCluster complete in 29.1205832s
	I0229 19:13:21.820704    7948 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:21.821717    7948 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:13:21.823696    7948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:21.824699    7948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:13:21.824699    7948 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:13:21.824699    7948 addons.go:69] Setting storage-provisioner=true in profile "bridge-652900"
	I0229 19:13:21.824699    7948 addons.go:69] Setting default-storageclass=true in profile "bridge-652900"
	I0229 19:13:21.825718    7948 addons.go:234] Setting addon storage-provisioner=true in "bridge-652900"
	I0229 19:13:21.825718    7948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-652900"
	I0229 19:13:21.825718    7948 host.go:66] Checking if "bridge-652900" exists ...
	I0229 19:13:21.825718    7948 config.go:182] Loaded profile config "bridge-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:21.853287    7948 cli_runner.go:164] Run: docker container inspect bridge-652900 --format={{.State.Status}}
	I0229 19:13:21.854304    7948 cli_runner.go:164] Run: docker container inspect bridge-652900 --format={{.State.Status}}
	I0229 19:13:22.080475    7948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:13:22.086225    7948 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:22.086225    7948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:13:22.101285    7948 addons.go:234] Setting addon default-storageclass=true in "bridge-652900"
	I0229 19:13:22.102281    7948 host.go:66] Checking if "bridge-652900" exists ...
	I0229 19:13:18.776378    2612 cli_runner.go:217] Completed: docker run --rm --name kubenet-652900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --entrypoint /usr/bin/test -v kubenet-652900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.6465875s)
	I0229 19:13:18.776378    2612 oci.go:107] Successfully prepared a docker volume kubenet-652900
	I0229 19:13:18.776378    2612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:18.776378    2612 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 19:13:18.784411    2612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-652900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 19:13:22.107311    7948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-652900
	I0229 19:13:22.136294    7948 cli_runner.go:164] Run: docker container inspect bridge-652900 --format={{.State.Status}}
	I0229 19:13:22.322083    7948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61228 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-652900\id_rsa Username:docker}
	I0229 19:13:22.368078    7948 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:22.368078    7948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:13:22.378084    7948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-652900
	I0229 19:13:22.493315    7948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:22.589201    7948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61228 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-652900\id_rsa Username:docker}
	I0229 19:13:22.833831    7948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:23.493584    7948 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-652900" context rescaled to 1 replicas
	I0229 19:13:23.493676    7948 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:13:23.507884    7948 out.go:177] * Verifying Kubernetes components...
	I0229 19:13:23.505892    7948 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.6811803s)
	I0229 19:13:20.510033   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.023547   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.505737   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:22.017024   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:23.500875   14472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.4838401s)
	I0229 19:13:23.500875   14472 kubeadm.go:1088] duration metric: took 13.3881769s to wait for elevateKubeSystemPrivileges.
	I0229 19:13:23.500875   14472 kubeadm.go:406] StartCluster complete in 31.0104072s
	I0229 19:13:23.500875   14472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:23.500875   14472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:13:23.507884   14472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:23.509894   14472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:13:23.509894   14472 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:13:23.509894   14472 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-652900"
	I0229 19:13:23.509894   14472 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-652900"
	I0229 19:13:23.510900   14472 host.go:66] Checking if "enable-default-cni-652900" exists ...
	I0229 19:13:23.510900   14472 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-652900"
	I0229 19:13:23.510900   14472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-652900"
	I0229 19:13:23.510900   14472 config.go:182] Loaded profile config "enable-default-cni-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:23.536885   14472 cli_runner.go:164] Run: docker container inspect enable-default-cni-652900 --format={{.State.Status}}
	I0229 19:13:23.537877   14472 cli_runner.go:164] Run: docker container inspect enable-default-cni-652900 --format={{.State.Status}}
	I0229 19:13:23.756407   14472 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-652900"
	I0229 19:13:23.756530   14472 host.go:66] Checking if "enable-default-cni-652900" exists ...
	I0229 19:13:23.776475   14472 cli_runner.go:164] Run: docker container inspect enable-default-cni-652900 --format={{.State.Status}}
	I0229 19:13:23.926146   14472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:13:24.085413   14472 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:24.085509   14472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:13:24.105407   14472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-652900
	I0229 19:13:24.128905   14472 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:24.128905   14472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:13:24.139921   14472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-652900
	I0229 19:13:24.306546   14472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61236 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\enable-default-cni-652900\id_rsa Username:docker}
	I0229 19:13:24.348742   14472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61236 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\enable-default-cni-652900\id_rsa Username:docker}
	I0229 19:13:24.457232   14472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:24.501584   14472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:24.604676   14472 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.0947737s)
	I0229 19:13:24.604676   14472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:13:24.917705   14472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-652900" context rescaled to 1 replicas
	I0229 19:13:24.917705   14472 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:13:24.925429   14472 out.go:177] * Verifying Kubernetes components...
	I0229 19:13:24.948114   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:23.507884    7948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:13:23.528883    7948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3305369s)
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.6710504s)
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.6534727s)
	I0229 19:13:25.164385    7948 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.6354895s)
	I0229 19:13:25.177214    7948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-652900
	I0229 19:13:25.212476    7948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:13:25.217487    7948 addons.go:505] enable addons completed in 3.3927627s: enabled=[storage-provisioner default-storageclass]
	I0229 19:13:25.387810    7948 node_ready.go:35] waiting up to 15m0s for node "bridge-652900" to be "Ready" ...
	I0229 19:13:25.398481    7948 node_ready.go:49] node "bridge-652900" has status "Ready":"True"
	I0229 19:13:25.398596    7948 node_ready.go:38] duration metric: took 10.4804ms waiting for node "bridge-652900" to be "Ready" ...
	I0229 19:13:25.398596    7948 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:25.417028    7948 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.5961108s)
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.4930199s)
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.1495841s)
	I0229 19:13:28.097722   14472 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.6404631s)
	I0229 19:13:28.115790   14472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-652900
	I0229 19:13:28.344161   14472 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-652900" to be "Ready" ...
	I0229 19:13:28.424012   14472 node_ready.go:49] node "enable-default-cni-652900" has status "Ready":"True"
	I0229 19:13:28.424119   14472 node_ready.go:38] duration metric: took 79.9573ms waiting for node "enable-default-cni-652900" to be "Ready" ...
	I0229 19:13:28.424195   14472 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:28.455459   14472 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-xt2b4" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:28.470475   14472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:13:28.474499   14472 addons.go:505] enable addons completed in 4.9645682s: enabled=[storage-provisioner default-storageclass]
	I0229 19:13:30.478175   14472 pod_ready.go:92] pod "coredns-5dd5756b68-xt2b4" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.478252   14472 pod_ready.go:81] duration metric: took 2.0227782s waiting for pod "coredns-5dd5756b68-xt2b4" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.478252   14472 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.493222   14472 pod_ready.go:92] pod "etcd-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.493281   14472 pod_ready.go:81] duration metric: took 15.0285ms waiting for pod "etcd-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.493281   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.507457   14472 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.507457   14472 pod_ready.go:81] duration metric: took 14.1759ms waiting for pod "kube-apiserver-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.507457   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.523650   14472 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.523650   14472 pod_ready.go:81] duration metric: took 16.1928ms waiting for pod "kube-controller-manager-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.523650   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-7fs6d" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.538278   14472 pod_ready.go:92] pod "kube-proxy-7fs6d" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.538278   14472 pod_ready.go:81] duration metric: took 14.628ms waiting for pod "kube-proxy-7fs6d" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.538278   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.877653   14472 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.877653   14472 pod_ready.go:81] duration metric: took 339.3727ms waiting for pod "kube-scheduler-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.877653   14472 pod_ready.go:38] duration metric: took 2.4534397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:30.877653   14472 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:13:30.893957   14472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:13:30.925352   14472 api_server.go:72] duration metric: took 6.0076026s to wait for apiserver process to appear ...
	I0229 19:13:30.925352   14472 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:13:30.925352   14472 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61235/healthz ...
	I0229 19:13:30.941376   14472 api_server.go:279] https://127.0.0.1:61235/healthz returned 200:
	ok
	I0229 19:13:30.945348   14472 api_server.go:141] control plane version: v1.28.4
	I0229 19:13:30.945348   14472 api_server.go:131] duration metric: took 19.9959ms to wait for apiserver health ...
	I0229 19:13:30.945348   14472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:13:31.088416   14472 system_pods.go:59] 7 kube-system pods found
	I0229 19:13:31.088541   14472 system_pods.go:61] "coredns-5dd5756b68-xt2b4" [95171e04-c522-46e5-a759-27b5869d7d36] Running
	I0229 19:13:31.088541   14472 system_pods.go:61] "etcd-enable-default-cni-652900" [4ddbc764-adad-45c3-80e0-138cc7ef3f8c] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-apiserver-enable-default-cni-652900" [047f3d07-34a0-424d-ad95-6c6d503db759] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-controller-manager-enable-default-cni-652900" [6153d2a9-409e-4cd8-bc71-864e2d37041c] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-proxy-7fs6d" [32abedad-6c34-4043-9e4e-524d2f955678] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-scheduler-enable-default-cni-652900" [730eb52d-7be7-407f-a452-6d72396d7af2] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "storage-provisioner" [99f20e05-1657-4023-9d05-6c06d4ba3cf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 19:13:31.088587   14472 system_pods.go:74] duration metric: took 143.2378ms to wait for pod list to return data ...
	I0229 19:13:31.088680   14472 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:13:31.269500   14472 default_sa.go:45] found service account: "default"
	I0229 19:13:31.269571   14472 default_sa.go:55] duration metric: took 180.8188ms for default service account to be created ...
	I0229 19:13:31.269571   14472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:13:31.483245   14472 system_pods.go:86] 7 kube-system pods found
	I0229 19:13:31.483245   14472 system_pods.go:89] "coredns-5dd5756b68-xt2b4" [95171e04-c522-46e5-a759-27b5869d7d36] Running
	I0229 19:13:31.483245   14472 system_pods.go:89] "etcd-enable-default-cni-652900" [4ddbc764-adad-45c3-80e0-138cc7ef3f8c] Running
	I0229 19:13:31.483245   14472 system_pods.go:89] "kube-apiserver-enable-default-cni-652900" [047f3d07-34a0-424d-ad95-6c6d503db759] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "kube-controller-manager-enable-default-cni-652900" [6153d2a9-409e-4cd8-bc71-864e2d37041c] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "kube-proxy-7fs6d" [32abedad-6c34-4043-9e4e-524d2f955678] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "kube-scheduler-enable-default-cni-652900" [730eb52d-7be7-407f-a452-6d72396d7af2] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "storage-provisioner" [99f20e05-1657-4023-9d05-6c06d4ba3cf7] Running
	I0229 19:13:31.483338   14472 system_pods.go:126] duration metric: took 213.7647ms to wait for k8s-apps to be running ...
	I0229 19:13:31.483416   14472 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:13:31.497732   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:31.524180   14472 system_svc.go:56] duration metric: took 40.8415ms WaitForService to wait for kubelet.
	I0229 19:13:31.524221   14472 kubeadm.go:581] duration metric: took 6.6064669s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:13:31.524286   14472 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:13:31.678661   14472 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 19:13:31.678661   14472 node_conditions.go:123] node cpu capacity is 16
	I0229 19:13:31.678661   14472 node_conditions.go:105] duration metric: took 154.3731ms to run NodePressure ...
	I0229 19:13:31.678661   14472 start.go:228] waiting for startup goroutines ...
	I0229 19:13:31.678661   14472 start.go:233] waiting for cluster config update ...
	I0229 19:13:31.678661   14472 start.go:242] writing updated cluster config ...
	I0229 19:13:31.691140   14472 ssh_runner.go:195] Run: rm -f paused
	I0229 19:13:31.838239   14472 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:13:31.842047   14472 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-652900" cluster and "default" namespace by default
	I0229 19:13:27.471942    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:29.942886    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:32.443024    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:34.937307    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:36.940221    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:38.948090    7948 pod_ready.go:97] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.85.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 19:13:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 19:13:27 +0000 UTC,FinishedAt:2024-02-29 19:13:37 +0000 UTC,ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8 Started:0xc002e748c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 19:13:38.948188    7948 pod_ready.go:81] duration metric: took 13.5310527s waiting for pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace to be "Ready" ...
	E0229 19:13:38.948188    7948 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.85.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 19:13:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 19:13:27 +0000 UTC,FinishedAt:2024-02-29 19:13:37 +0000 UTC,ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8 Started:0xc002e748c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 19:13:38.948272    7948 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:40.972198    7948 pod_ready.go:102] pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:38.418046    2612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-652900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (19.633482s)
	I0229 19:13:38.418046    2612 kic.go:203] duration metric: took 19.641515 seconds to extract preloaded images to volume
	I0229 19:13:38.435319    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:13:38.857095    2612 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:13:38.816805162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:13:38.867079    2612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 19:13:39.264013    2612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-652900 --name kubenet-652900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-652900 --network kubenet-652900 --ip 192.168.76.2 --volume kubenet-652900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 19:13:40.361162    2612 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-652900 --name kubenet-652900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-652900 --network kubenet-652900 --ip 192.168.76.2 --volume kubenet-652900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08: (1.0961408s)
	I0229 19:13:40.379193    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Running}}
	I0229 19:13:40.659183    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:13:40.922183    2612 cli_runner.go:164] Run: docker exec kubenet-652900 stat /var/lib/dpkg/alternatives/iptables
	I0229 19:13:41.268126    2612 oci.go:144] the created container "kubenet-652900" has a running status.
	I0229 19:13:41.268126    2612 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa...
	I0229 19:13:42.002876    2612 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 19:13:42.310877    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:13:42.567865    2612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 19:13:42.567865    2612 kic_runner.go:114] Args: [docker exec --privileged kubenet-652900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 19:13:42.877874    2612 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa...
	I0229 19:13:42.978875    7948 pod_ready.go:102] pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:43.971525    7948 pod_ready.go:92] pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:43.971525    7948 pod_ready.go:81] duration metric: took 5.0232109s waiting for pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.972527    7948 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.983529    7948 pod_ready.go:92] pod "etcd-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:43.983529    7948 pod_ready.go:81] duration metric: took 11.0015ms waiting for pod "etcd-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.983529    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.998802    7948 pod_ready.go:92] pod "kube-apiserver-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:43.998802    7948 pod_ready.go:81] duration metric: took 15.2729ms waiting for pod "kube-apiserver-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.998880    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.014735    7948 pod_ready.go:92] pod "kube-controller-manager-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:44.014735    7948 pod_ready.go:81] duration metric: took 15.855ms waiting for pod "kube-controller-manager-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.014735    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-vgpqv" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.029717    7948 pod_ready.go:92] pod "kube-proxy-vgpqv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:44.029717    7948 pod_ready.go:81] duration metric: took 14.9819ms waiting for pod "kube-proxy-vgpqv" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.029717    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.374959    7948 pod_ready.go:92] pod "kube-scheduler-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:44.375004    7948 pod_ready.go:81] duration metric: took 345.2847ms waiting for pod "kube-scheduler-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.375004    7948 pod_ready.go:38] duration metric: took 18.9762565s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:44.375093    7948 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:13:44.390677    7948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:13:44.418678    7948 api_server.go:72] duration metric: took 20.9247862s to wait for apiserver process to appear ...
	I0229 19:13:44.418678    7948 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:13:44.418678    7948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61232/healthz ...
	I0229 19:13:44.432694    7948 api_server.go:279] https://127.0.0.1:61232/healthz returned 200:
	ok
	I0229 19:13:44.437672    7948 api_server.go:141] control plane version: v1.28.4
	I0229 19:13:44.437672    7948 api_server.go:131] duration metric: took 18.994ms to wait for apiserver health ...
	I0229 19:13:44.437672    7948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:13:44.584426    7948 system_pods.go:59] 7 kube-system pods found
	I0229 19:13:44.584426    7948 system_pods.go:61] "coredns-5dd5756b68-ppr7c" [b6dc096a-bd1c-495d-814c-97cf9df8c055] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "etcd-bridge-652900" [c7edf61c-e7dd-483b-b6ec-2c5cb356a38b] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-apiserver-bridge-652900" [1c29d36d-e4e6-4de4-9d16-2883956e58b9] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-controller-manager-bridge-652900" [1d57c399-a9bd-4838-811b-dca8932d15af] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-proxy-vgpqv" [7654b9cc-c8d7-4041-8304-9f3e44ad85d4] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-scheduler-bridge-652900" [b9a48085-e455-4c39-a6bb-99c5232898bb] Running
	I0229 19:13:44.584964    7948 system_pods.go:61] "storage-provisioner" [59049e0e-2666-4064-a7e5-7c286bd68480] Running
	I0229 19:13:44.584964    7948 system_pods.go:74] duration metric: took 147.2907ms to wait for pod list to return data ...
	I0229 19:13:44.584964    7948 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:13:44.766220    7948 default_sa.go:45] found service account: "default"
	I0229 19:13:44.766508    7948 default_sa.go:55] duration metric: took 181.5427ms for default service account to be created ...
	I0229 19:13:44.766605    7948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:13:44.976238    7948 system_pods.go:86] 7 kube-system pods found
	I0229 19:13:44.976238    7948 system_pods.go:89] "coredns-5dd5756b68-ppr7c" [b6dc096a-bd1c-495d-814c-97cf9df8c055] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "etcd-bridge-652900" [c7edf61c-e7dd-483b-b6ec-2c5cb356a38b] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-apiserver-bridge-652900" [1c29d36d-e4e6-4de4-9d16-2883956e58b9] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-controller-manager-bridge-652900" [1d57c399-a9bd-4838-811b-dca8932d15af] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-proxy-vgpqv" [7654b9cc-c8d7-4041-8304-9f3e44ad85d4] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-scheduler-bridge-652900" [b9a48085-e455-4c39-a6bb-99c5232898bb] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "storage-provisioner" [59049e0e-2666-4064-a7e5-7c286bd68480] Running
	I0229 19:13:44.976238    7948 system_pods.go:126] duration metric: took 209.6314ms to wait for k8s-apps to be running ...
	I0229 19:13:44.976238    7948 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:13:44.988254    7948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:45.015171    7948 system_svc.go:56] duration metric: took 38.9327ms WaitForService to wait for kubelet.
	I0229 19:13:45.015171    7948 kubeadm.go:581] duration metric: took 21.5212748s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:13:45.015171    7948 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:13:45.175730    7948 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 19:13:45.175730    7948 node_conditions.go:123] node cpu capacity is 16
	I0229 19:13:45.175730    7948 node_conditions.go:105] duration metric: took 160.5579ms to run NodePressure ...
	I0229 19:13:45.175835    7948 start.go:228] waiting for startup goroutines ...
	I0229 19:13:45.175835    7948 start.go:233] waiting for cluster config update ...
	I0229 19:13:45.175835    7948 start.go:242] writing updated cluster config ...
	I0229 19:13:45.191186    7948 ssh_runner.go:195] Run: rm -f paused
	I0229 19:13:45.359309    7948 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:13:45.362773    7948 out.go:177] * Done! kubectl is now configured to use "bridge-652900" cluster and "default" namespace by default
	I0229 19:13:45.799031    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:13:45.998215    2612 machine.go:88] provisioning docker machine ...
	I0229 19:13:45.998385    2612 ubuntu.go:169] provisioning hostname "kubenet-652900"
	I0229 19:13:46.012372    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:46.207363    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:46.219385    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:46.219385    2612 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-652900 && echo "kubenet-652900" | sudo tee /etc/hostname
	I0229 19:13:46.421391    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-652900
	
	I0229 19:13:46.436584    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:46.600877    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:46.600877    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:46.600877    2612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-652900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-652900/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-652900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:13:46.773469    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:13:46.773557    2612 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 19:13:46.773627    2612 ubuntu.go:177] setting up certificates
	I0229 19:13:46.773689    2612 provision.go:83] configureAuth start
	I0229 19:13:46.786260    2612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-652900
	I0229 19:13:46.982076    2612 provision.go:138] copyHostCerts
	I0229 19:13:46.982076    2612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:13:46.982076    2612 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 19:13:46.982076    2612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 19:13:46.984125    2612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:13:46.984125    2612 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 19:13:46.985104    2612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:13:46.986076    2612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:13:46.986076    2612 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 19:13:46.987163    2612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:13:46.988093    2612 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-652900 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-652900]
	I0229 19:13:47.239980    2612 provision.go:172] copyRemoteCerts
	I0229 19:13:47.250989    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:13:47.260006    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:47.471987    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:47.601640    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 19:13:47.652249    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:13:47.694499    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0229 19:13:47.741893    2612 provision.go:86] duration metric: configureAuth took 968.128ms
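configureAuth generates a server certificate whose SANs cover the container IP, localhost, and the cluster names, and copyRemoteCerts places it together with the CA under /etc/docker. Those are exactly the paths the dockerd TLS flags reference once the unit below is written; isolated, the flags look like:

	# the TLS flags the provisioned unit passes to dockerd (paths as scp'd above)
	dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock \
	  --tlsverify \
	  --tlscacert /etc/docker/ca.pem \
	  --tlscert /etc/docker/server.pem \
	  --tlskey /etc/docker/server-key.pem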
	I0229 19:13:47.741952    2612 ubuntu.go:193] setting minikube options for container-runtime
	I0229 19:13:47.742465    2612 config.go:182] Loaded profile config "kubenet-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:47.752547    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:47.964361    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:47.964913    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:47.964913    2612 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:13:48.147137    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 19:13:48.147207    2612 ubuntu.go:71] root file system type: overlay
	I0229 19:13:48.147598    2612 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:13:48.165127    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:48.364211    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:48.364211    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:48.364211    2612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:13:48.563981    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 19:13:48.576329    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:48.768285    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:48.768946    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:48.769011    2612 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:13:50.333539    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:13:48.545414330 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0229 19:13:50.333539    2612 machine.go:91] provisioned docker machine in 4.3352884s
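The unit update just above is a write-then-swap idiom: the rendered unit goes to docker.service.new, and only if it differs from the live file does minikube move it into place and restart the daemon, leaving an unchanged node untouched. Condensed sketch of the pattern:

	# sketch: swap the rendered unit in only when it differs, then restart
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}

diff exits nonzero only when the files differ, so the restart branch runs exclusively on change, which is why the diff output appears in the log here.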
	I0229 19:13:50.333539    2612 client.go:171] LocalClient.Create took 35.6123745s
	I0229 19:13:50.333539    2612 start.go:167] duration metric: libmachine.API.Create for "kubenet-652900" took 35.6123745s
	I0229 19:13:50.333539    2612 start.go:300] post-start starting for "kubenet-652900" (driver="docker")
	I0229 19:13:50.333539    2612 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:13:50.350517    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:13:50.367538    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:50.599524    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:50.772548    2612 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:13:50.784535    2612 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 19:13:50.784535    2612 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 19:13:50.784535    2612 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 19:13:50.784535    2612 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 19:13:50.784535    2612 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 19:13:50.785536    2612 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 19:13:50.788529    2612 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 19:13:50.810525    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:13:50.836522    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 19:13:50.886705    2612 start.go:303] post-start completed in 552.1719ms
	I0229 19:13:50.907695    2612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-652900
	I0229 19:13:51.140702    2612 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\config.json ...
	I0229 19:13:51.157686    2612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 19:13:51.170696    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:51.397696    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:51.556695    2612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 19:13:51.568694    2612 start.go:128] duration metric: createHost completed in 36.8579404s
	I0229 19:13:51.568694    2612 start.go:83] releasing machines lock for "kubenet-652900", held for 36.8579404s
	I0229 19:13:51.577684    2612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-652900
	I0229 19:13:51.797695    2612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:13:51.812706    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:51.812706    2612 ssh_runner.go:195] Run: cat /version.json
	I0229 19:13:51.832031    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:52.044041    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:52.059032    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:52.363008    2612 ssh_runner.go:195] Run: systemctl --version
	I0229 19:13:52.388009    2612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 19:13:52.422008    2612 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0229 19:13:52.445051    2612 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0229 19:13:52.460012    2612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:13:52.548022    2612 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:13:52.548022    2612 start.go:475] detecting cgroup driver to use...
	I0229 19:13:52.548022    2612 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 19:13:52.549019    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:13:52.607010    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 19:13:52.652014    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:13:52.672010    2612 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:13:52.686244    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:13:52.746458    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:13:52.782063    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:13:52.833076    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:13:52.867055    2612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:13:52.906101    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:13:52.942071    2612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:13:52.967059    2612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:13:53.011079    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:53.210476    2612 ssh_runner.go:195] Run: sudo systemctl restart containerd
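The sed run above rewrites /etc/containerd/config.toml to match the cgroup driver detected on the host: the sandbox image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false (cgroupfs), and the legacy runc v1/linux runtime names are mapped to io.containerd.runc.v2. The two edits that actually pick the driver, plus the restart, reduce to:

	# sketch: force containerd onto the cgroupfs driver, then restart it
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd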
	I0229 19:13:53.382013    2612 start.go:475] detecting cgroup driver to use...
	I0229 19:13:53.382113    2612 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 19:13:53.394816    2612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:13:53.423148    2612 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 19:13:53.435852    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:13:53.462471    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:13:53.514251    2612 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:13:53.551030    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:13:53.568013    2612 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (193 bytes)
	I0229 19:13:53.633924    2612 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:13:53.791337    2612 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:13:53.943954    2612 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:13:53.944214    2612 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:13:53.987278    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:54.147094    2612 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:13:55.464411    2612 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.3173062s)
	I0229 19:13:55.478569    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 19:13:55.518584    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:13:55.555569    2612 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 19:13:55.740612    2612 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 19:13:55.934236    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:56.147139    2612 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 19:13:56.190765    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:13:56.230733    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:56.425544    2612 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 19:13:56.592799    2612 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 19:13:56.607787    2612 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 19:13:56.618803    2612 start.go:543] Will wait 60s for crictl version
	I0229 19:13:56.632786    2612 ssh_runner.go:195] Run: which crictl
	I0229 19:13:56.668647    2612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:13:56.778647    2612 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0229 19:13:56.790619    2612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:13:56.865585    2612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:13:56.926712    2612 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0229 19:13:56.937717    2612 cli_runner.go:164] Run: docker exec -t kubenet-652900 dig +short host.docker.internal
	I0229 19:13:57.246510    2612 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 19:13:57.259573    2612 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 19:13:57.274453    2612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
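Because the node is itself a Docker container, the host's address is discovered by resolving host.docker.internal from inside it and then pinned as host.minikube.internal via the grep-and-rewrite one-liner above. A sketch of the same steps, run on the node:

	# sketch (run on the node): resolve the Docker host's IP, then pin it
	HOST_IP=$(dig +short host.docker.internal)
	# drop any stale entry, append the fresh one, and copy the result back
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$HOST_IP"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts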
	I0229 19:13:57.305447    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:57.476401    2612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:57.486559    2612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:13:57.537158    2612 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 19:13:57.537260    2612 docker.go:615] Images already preloaded, skipping extraction
	I0229 19:13:57.545703    2612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:13:57.594686    2612 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 19:13:57.594777    2612 cache_images.go:84] Images are preloaded, skipping loading
	I0229 19:13:57.607995    2612 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:13:57.735844    2612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0229 19:13:57.735844    2612 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:13:57.735844    2612 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-652900 NodeName:kubenet-652900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:13:57.736564    2612 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-652900"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
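The rendered kubeadm.yaml above is one multi-document file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration it hands to the components. kubeadm takes the whole bundle through --config, so a hypothetical way to sanity-check a rendering like this without touching the node is a dry run:

	# hypothetical check: parse and exercise the rendered config without
	# creating anything (kubeadm init supports --dry-run)
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run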
	
	I0229 19:13:57.736756    2612 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubenet-652900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 19:13:57.750093    2612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 19:13:57.767026    2612 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:13:57.779031    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:13:57.800011    2612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
	I0229 19:13:57.832777    2612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:13:57.867951    2612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0229 19:13:57.920606    2612 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0229 19:13:57.934968    2612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:13:57.963548    2612 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900 for IP: 192.168.76.2
	I0229 19:13:57.963548    2612 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:57.963548    2612 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 19:13:57.964547    2612 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:13:57.965544    2612 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.key
	I0229 19:13:57.965544    2612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt with IP's: []
	I0229 19:13:58.523802    2612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt ...
	I0229 19:13:58.523802    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: {Name:mkee2c5fdf95fdcd5db1cd86782d60f0b9c24b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.526067    2612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.key ...
	I0229 19:13:58.526152    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.key: {Name:mkec6556a9f6406d9a4ea13eed590466865d0b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.527431    2612 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25
	I0229 19:13:58.527477    2612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:13:58.901885    2612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25 ...
	I0229 19:13:58.901885    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25: {Name:mk375451f66bc7990b7c902a22148a16103489d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.903909    2612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25 ...
	I0229 19:13:58.903909    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25: {Name:mke9d8ca8a25586f609361d99a26cb8aa6f8a1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.904240    2612 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt
	I0229 19:13:58.916037    2612 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key
	I0229 19:13:58.917036    2612 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key
	I0229 19:13:58.917036    2612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt with IP's: []
	I0229 19:13:59.049913    2612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt ...
	I0229 19:13:59.049913    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt: {Name:mk3cb52698f22961884ce5c5423d4c57ac536599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:59.050560    2612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key ...
	I0229 19:13:59.051563    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key: {Name:mk8305f3a6a959f2fcfbf796deb5bb3bd454352f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:59.066147    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 19:13:59.066501    2612 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 19:13:59.066501    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 19:13:59.066847    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 19:13:59.067223    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:13:59.067223    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:13:59.067738    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 19:13:59.069733    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:13:59.124686    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 19:13:59.167425    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:13:59.206764    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 19:13:59.249774    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:13:59.290644    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 19:13:59.332784    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:13:59.383817    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:13:59.429790    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:13:59.472072    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 19:13:59.514749    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 19:13:59.562316    2612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:13:59.607237    2612 ssh_runner.go:195] Run: openssl version
	I0229 19:13:59.633559    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 19:13:59.662657    2612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 19:13:59.674717    2612 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 19:13:59.685887    2612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 19:13:59.714711    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:13:59.747421    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:13:59.775336    2612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:59.786331    2612 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:59.797561    2612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:59.826615    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:13:59.861926    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 19:13:59.895051    2612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 19:13:59.906595    2612 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 19:13:59.918340    2612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 19:13:59.947694    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
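Each certificate staged under /usr/share/ca-certificates is made visible to OpenSSL by symlinking it into /etc/ssl/certs under its subject hash; that is what the repeated openssl x509 -hash / ln -fs pairs above do (b5213941.0 is minikubeCA's hash). For one file the idiom is:

	# sketch: register a CA so OpenSSL finds it by subject hash
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"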
	I0229 19:13:59.980095    2612 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:13:59.992598    2612 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:13:59.992598    2612 kubeadm.go:404] StartCluster: {Name:kubenet-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:14:00.004318    2612 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:14:00.064601    2612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:14:00.095887    2612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:14:00.116118    2612 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 19:14:00.126192    2612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:14:00.145733    2612 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:14:00.145733    2612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 19:14:00.327835    2612 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0229 19:14:00.490607    2612 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:14:19.354361    2612 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:14:19.354361    2612 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:14:19.354361    2612 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:14:19.354361    2612 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:14:19.355350    2612 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:14:19.355350    2612 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:14:19.359424    2612 out.go:204]   - Generating certificates and keys ...
	I0229 19:14:19.359424    2612 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:14:19.359424    2612 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 19:14:19.361362    2612 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 19:14:19.361362    2612 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubenet-652900 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0229 19:14:19.361362    2612 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubenet-652900 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:14:19.362360    2612 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:14:19.362360    2612 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:14:19.363362    2612 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:14:19.363362    2612 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:14:19.363362    2612 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:14:19.363362    2612 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:14:19.366352    2612 out.go:204]   - Booting up control plane ...
	I0229 19:14:19.366352    2612 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:14:19.366352    2612 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:14:19.368365    2612 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:14:19.368365    2612 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.522477 seconds
	I0229 19:14:19.368365    2612 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:14:19.369350    2612 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:14:19.369350    2612 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:14:19.369350    2612 kubeadm.go:322] [mark-control-plane] Marking the node kubenet-652900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:14:19.369350    2612 kubeadm.go:322] [bootstrap-token] Using token: dddyg5.8xck91887j66rhnf
	I0229 19:14:19.373356    2612 out.go:204]   - Configuring RBAC rules ...
	I0229 19:14:19.373356    2612 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:14:19.373356    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:14:19.374360    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:14:19.375378    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:14:19.375378    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:14:19.375378    2612 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:14:19.376357    2612 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:14:19.376357    2612 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:14:19.376357    2612 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:14:19.376357    2612 kubeadm.go:322] 
	I0229 19:14:19.376357    2612 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:14:19.376357    2612 kubeadm.go:322] 
	I0229 19:14:19.376357    2612 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:14:19.376357    2612 kubeadm.go:322] 
	I0229 19:14:19.377351    2612 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:14:19.377351    2612 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:14:19.377351    2612 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:14:19.377351    2612 kubeadm.go:322] 
	I0229 19:14:19.377351    2612 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:14:19.377351    2612 kubeadm.go:322] 
	I0229 19:14:19.377351    2612 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:14:19.378373    2612 kubeadm.go:322] 
	I0229 19:14:19.378373    2612 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:14:19.378373    2612 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:14:19.379356    2612 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:14:19.379356    2612 kubeadm.go:322] 
	I0229 19:14:19.379356    2612 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:14:19.379356    2612 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:14:19.379356    2612 kubeadm.go:322] 
	I0229 19:14:19.379356    2612 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dddyg5.8xck91887j66rhnf \
	I0229 19:14:19.379356    2612 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80eb25c1c6cbd8ac057f190e22b9147f2ead62b31e10db2e8c638577512ad3fe \
	I0229 19:14:19.379356    2612 kubeadm.go:322] 	--control-plane 
	I0229 19:14:19.380355    2612 kubeadm.go:322] 
	I0229 19:14:19.380355    2612 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:14:19.380355    2612 kubeadm.go:322] 
	I0229 19:14:19.380355    2612 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dddyg5.8xck91887j66rhnf \
	I0229 19:14:19.380355    2612 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80eb25c1c6cbd8ac057f190e22b9147f2ead62b31e10db2e8c638577512ad3fe 
	I0229 19:14:19.380355    2612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0229 19:14:19.380355    2612 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:14:19.402354    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=kubenet-652900 minikube.k8s.io/updated_at=2024_02_29T19_14_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:19.402354    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:19.403357    2612 ops.go:34] apiserver oom_adj: -16
	I0229 19:14:20.245287    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:20.753175    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:21.244919    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:21.751126    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:22.248621    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:22.749928    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:23.256705    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:23.744510    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:24.256825    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:24.755716    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:25.265790    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:25.755576    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:26.257068    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:26.756699    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:27.257545    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:27.749199    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:28.251577    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:28.758878    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:29.255489    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:29.758118    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:30.257221    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:31.117825    2612 kubeadm.go:1088] duration metric: took 11.7373731s to wait for elevateKubeSystemPrivileges.
	I0229 19:14:31.117825    2612 kubeadm.go:406] StartCluster complete in 31.1249709s
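The burst of 'kubectl get sa default' calls above is minikube polling for the default service account as part of elevateKubeSystemPrivileges, which the 'duration metric' line above credits with the 11.7s. A minimal shell equivalent of that wait loop (a sketch, not minikube's actual code) would be:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done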
	I0229 19:14:31.117825    2612 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:14:31.117825    2612 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:14:31.120843    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:14:31.122834    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:14:31.122834    2612 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:14:31.122834    2612 addons.go:69] Setting storage-provisioner=true in profile "kubenet-652900"
	I0229 19:14:31.122834    2612 addons.go:234] Setting addon storage-provisioner=true in "kubenet-652900"
	I0229 19:14:31.122834    2612 addons.go:69] Setting default-storageclass=true in profile "kubenet-652900"
	I0229 19:14:31.122834    2612 host.go:66] Checking if "kubenet-652900" exists ...
	I0229 19:14:31.122834    2612 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-652900"
	I0229 19:14:31.122834    2612 config.go:182] Loaded profile config "kubenet-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:14:31.150821    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:14:31.151818    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	W0229 19:14:31.299834    2612 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "kubenet-652900" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 19:14:31.300834    2612 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
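The coredns rescale failure above is an ordinary optimistic-concurrency conflict: the deployment object changed between minikube's read and its write, so the apiserver rejected the stale update. It is non-fatal here, and the same scale-down done by hand would simply be (illustrative):

    kubectl -n kube-system scale deployment coredns --replicas=1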
	I0229 19:14:31.300834    2612 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:14:31.305839    2612 out.go:177] * Verifying Kubernetes components...
	I0229 19:14:31.330826    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:14:31.364840    2612 addons.go:234] Setting addon default-storageclass=true in "kubenet-652900"
	I0229 19:14:31.364840    2612 host.go:66] Checking if "kubenet-652900" exists ...
	I0229 19:14:31.378831    2612 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:14:31.381842    2612 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:14:31.381842    2612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:14:31.394819    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:14:31.399830    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:14:31.589831    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:14:31.604828    2612 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:14:31.604828    2612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:14:31.619827    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:14:31.798848    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:14:31.891515    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:14:31.909695    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:14:32.008678    2612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:14:32.145670    2612 node_ready.go:35] waiting up to 15m0s for node "kubenet-652900" to be "Ready" ...
	I0229 19:14:32.189555    2612 node_ready.go:49] node "kubenet-652900" has status "Ready":"True"
	I0229 19:14:32.189555    2612 node_ready.go:38] duration metric: took 43.8841ms waiting for node "kubenet-652900" to be "Ready" ...
	I0229 19:14:32.190160    2612 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:14:32.213455    2612 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace to be "Ready" ...
	I0229 19:14:32.217343    2612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:14:34.295963    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:36.317580    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:36.490883    2612 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.5993305s)
	I0229 19:14:36.490883    2612 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
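The sed pipeline that just completed rewrites the CoreDNS Corefile before 'kubectl replace' re-applies it. Reconstructed from the two sed expressions (the resulting file itself is not captured in this log), the injected block is a hosts stanza plus a 'log' directive ahead of 'errors':

    hosts {
       192.168.65.254 host.minikube.internal
       fallthrough
    }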
	I0229 19:14:36.880572    2612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.6631903s)
	I0229 19:14:36.880572    2612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.8718538s)
	I0229 19:14:36.909572    2612 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:14:36.912553    2612 addons.go:505] enable addons completed in 5.7896712s: enabled=[storage-provisioner default-storageclass]
	I0229 19:14:38.754895    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:41.233671    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:43.236985    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:45.249590    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:47.734437    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:49.738864    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:51.740358    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:53.741414    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:56.310956    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:58.732892    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:00.734690    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:02.740366    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:05.234073    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:07.242407    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:09.246915    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:11.791908    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:12.235948    2612 pod_ready.go:92] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:12.235948    2612 pod_ready.go:81] duration metric: took 40.0221652s waiting for pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:12.235948    2612 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-n9887" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:14.255629    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:16.276814    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:18.762378    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:20.767367    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:22.267101    2612 pod_ready.go:92] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.267101    2612 pod_ready.go:81] duration metric: took 10.0310706s waiting for pod "coredns-5dd5756b68-n9887" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.267101    2612 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.280015    2612 pod_ready.go:92] pod "etcd-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.280015    2612 pod_ready.go:81] duration metric: took 12.9144ms waiting for pod "etcd-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.280015    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.290624    2612 pod_ready.go:92] pod "kube-apiserver-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.290624    2612 pod_ready.go:81] duration metric: took 10.6086ms waiting for pod "kube-apiserver-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.290624    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.301842    2612 pod_ready.go:92] pod "kube-controller-manager-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.301842    2612 pod_ready.go:81] duration metric: took 11.2176ms waiting for pod "kube-controller-manager-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.301842    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-f78bb" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.316255    2612 pod_ready.go:92] pod "kube-proxy-f78bb" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.316255    2612 pod_ready.go:81] duration metric: took 14.4132ms waiting for pod "kube-proxy-f78bb" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.316255    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.666846    2612 pod_ready.go:92] pod "kube-scheduler-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.666846    2612 pod_ready.go:81] duration metric: took 350.5879ms waiting for pod "kube-scheduler-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.666973    2612 pod_ready.go:38] duration metric: took 50.4763995s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
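Each of the per-pod waits above can be approximated with a single 'kubectl wait' per selector; a sketch of an equivalent check (not what minikube actually runs) is:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=15m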
	I0229 19:15:22.666973    2612 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:15:22.680455    2612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:15:22.707548    2612 api_server.go:72] duration metric: took 51.4062923s to wait for apiserver process to appear ...
	I0229 19:15:22.708559    2612 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:15:22.708559    2612 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61319/healthz ...
	I0229 19:15:22.724191    2612 api_server.go:279] https://127.0.0.1:61319/healthz returned 200:
	ok
	I0229 19:15:22.729601    2612 api_server.go:141] control plane version: v1.28.4
	I0229 19:15:22.729601    2612 api_server.go:131] duration metric: took 21.0421ms to wait for apiserver health ...
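The healthz probe above is a plain HTTPS GET against the host-mapped apiserver port (61319 in this run), reproducible by hand as:

    curl -k https://127.0.0.1:61319/healthz

It returns the bare string 'ok' on success; '-k' skips certificate verification for convenience.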
	I0229 19:15:22.729601    2612 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:15:22.878982    2612 system_pods.go:59] 8 kube-system pods found
	I0229 19:15:22.879228    2612 system_pods.go:61] "coredns-5dd5756b68-4bbhp" [d4f7e92e-7989-4dc7-926b-ffe4535d3777] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "coredns-5dd5756b68-n9887" [7fd56343-3f9f-4561-8ec8-70b32fb4f72b] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "etcd-kubenet-652900" [e60858b1-e4fe-4ec9-b9e6-0621f60cd9d9] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-apiserver-kubenet-652900" [3d940450-6611-48d3-b390-3df5a9a0b5eb] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-controller-manager-kubenet-652900" [9dc2da02-9ef9-40ae-9bfb-bc21477a0f51] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-proxy-f78bb" [20d02d18-4fa1-4852-b217-fb7193effb23] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-scheduler-kubenet-652900" [7fb4828b-3376-4b94-bde0-1cd982dbefb1] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "storage-provisioner" [d23b50f2-9878-47dc-8185-56087fe44a01] Running
	I0229 19:15:22.879228    2612 system_pods.go:74] duration metric: took 149.6258ms to wait for pod list to return data ...
	I0229 19:15:22.879287    2612 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:15:23.060507    2612 default_sa.go:45] found service account: "default"
	I0229 19:15:23.060620    2612 default_sa.go:55] duration metric: took 181.3317ms for default service account to be created ...
	I0229 19:15:23.060620    2612 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:15:23.272885    2612 system_pods.go:86] 8 kube-system pods found
	I0229 19:15:23.272885    2612 system_pods.go:89] "coredns-5dd5756b68-4bbhp" [d4f7e92e-7989-4dc7-926b-ffe4535d3777] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "coredns-5dd5756b68-n9887" [7fd56343-3f9f-4561-8ec8-70b32fb4f72b] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "etcd-kubenet-652900" [e60858b1-e4fe-4ec9-b9e6-0621f60cd9d9] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-apiserver-kubenet-652900" [3d940450-6611-48d3-b390-3df5a9a0b5eb] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-controller-manager-kubenet-652900" [9dc2da02-9ef9-40ae-9bfb-bc21477a0f51] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-proxy-f78bb" [20d02d18-4fa1-4852-b217-fb7193effb23] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-scheduler-kubenet-652900" [7fb4828b-3376-4b94-bde0-1cd982dbefb1] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "storage-provisioner" [d23b50f2-9878-47dc-8185-56087fe44a01] Running
	I0229 19:15:23.272885    2612 system_pods.go:126] duration metric: took 212.2631ms to wait for k8s-apps to be running ...
	I0229 19:15:23.272885    2612 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:15:23.285498    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:15:23.308782    2612 system_svc.go:56] duration metric: took 35.8969ms WaitForService to wait for kubelet.
	I0229 19:15:23.308782    2612 kubeadm.go:581] duration metric: took 52.007522s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:15:23.308782    2612 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:15:23.471411    2612 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 19:15:23.471411    2612 node_conditions.go:123] node cpu capacity is 16
	I0229 19:15:23.471411    2612 node_conditions.go:105] duration metric: took 162.6278ms to run NodePressure ...
	I0229 19:15:23.471526    2612 start.go:228] waiting for startup goroutines ...
	I0229 19:15:23.471526    2612 start.go:233] waiting for cluster config update ...
	I0229 19:15:23.471526    2612 start.go:242] writing updated cluster config ...
	I0229 19:15:23.483867    2612 ssh_runner.go:195] Run: rm -f paused
	I0229 19:15:23.630356    2612 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:15:23.634795    2612 out.go:177] * Done! kubectl is now configured to use "kubenet-652900" cluster and "default" namespace by default
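The skew note above is informational rather than an error: kubectl supports one minor version of skew against the apiserver, so client 1.29.2 against a 1.28.4 cluster is within policy. Both versions can be confirmed with standard kubectl (illustrative):

    kubectl version -o yaml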
	I0229 19:16:05.094764    3012 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:16:05.095215    3012 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:16:05.099756    3012 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:16:05.100306    3012 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:16:05.100596    3012 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:16:05.100925    3012 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:16:05.100925    3012 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 19:16:05.101471    3012 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:16:05.101798    3012 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:16:05.101940    3012 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:16:05.102008    3012 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:16:05.107021    3012 out.go:204]   - Generating certificates and keys ...
	I0229 19:16:05.107311    3012 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:16:05.107636    3012 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:16:05.107851    3012 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:16:05.108092    3012 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:16:05.108852    3012 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:16:05.109114    3012 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:16:05.109396    3012 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:16:05.109536    3012 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:16:05.109963    3012 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:16:05.110112    3012 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:16:05.110409    3012 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:16:05.110687    3012 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:16:05.110835    3012 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:16:05.111381    3012 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:16:05.113856    3012 out.go:204]   - Booting up control plane ...
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:16:05.115027    3012 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:16:05.116054    3012 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:16:05.116054    3012 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:16:05.116054    3012 kubeadm.go:322] 
	I0229 19:16:05.116054    3012 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:16:05.116054    3012 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:16:05.116054    3012 kubeadm.go:322] 
	I0229 19:16:05.116751    3012 kubeadm.go:322] This error is likely caused by:
	I0229 19:16:05.116871    3012 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:16:05.116979    3012 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:16:05.116979    3012 kubeadm.go:322] 
	I0229 19:16:05.117821    3012 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:16:05.118155    3012 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:16:05.118181    3012 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:16:05.118181    3012 kubeadm.go:322] 
	I0229 19:16:05.118181    3012 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:16:05.118181    3012 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 19:16:05.118968    3012 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 19:16:05.119090    3012 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:16:05.119329    3012 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:16:05.119561    3012 kubeadm.go:322] 	- 'docker logs CONTAINERID'
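A compact variant of the listing suggested above, using docker's name filter for the 'k8s_' prefix that kubelet gives its containers (illustrative):

    docker ps -a --filter name=k8s_ --format '{{.ID}} {{.Names}} {{.Status}}'

The per-component listings minikube runs just below all come back empty, showing the control-plane containers were never created at all.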
	I0229 19:16:05.119651    3012 kubeadm.go:406] StartCluster complete in 12m31.0799276s
	I0229 19:16:05.129772    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:16:05.171665    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.171665    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:16:05.179666    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:16:05.229862    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.229862    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:16:05.249214    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:16:05.307193    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.307193    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:16:05.316151    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:16:05.362653    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.362653    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:16:05.370651    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:16:05.435306    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.435363    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:16:05.446405    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:16:05.497155    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.497695    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:16:05.510707    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:16:05.553447    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.553447    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:16:05.563373    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:16:05.608587    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.608587    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:16:05.608587    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:16:05.608587    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:16:05.762137    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:16:05.762137    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:16:05.762137    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:16:05.794822    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:16:05.794822    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:16:05.881247    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:16:05.881247    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:16:05.936905    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:16:05.943928    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:16:05.955775    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:16:05.959784    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:54 old-k8s-version-718400 kubelet[11316]: E0229 19:15:54.343799   11316 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:16:05.965770    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:56 old-k8s-version-718400 kubelet[11316]: E0229 19:15:56.333115   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:16:05.979274    3012 logs.go:138] Found kubelet problem: Feb 29 19:16:02 old-k8s-version-718400 kubelet[11316]: E0229 19:16:02.344608   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:16:05.982275    3012 logs.go:138] Found kubelet problem: Feb 29 19:16:03 old-k8s-version-718400 kubelet[11316]: E0229 19:16:03.341406   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
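The ImageInspectError entries above ('Id or size of image ... is not set') indicate Docker's image store returned a record without an ID or size for the cached v1.16.0 control-plane images, so kubelet refused to start their containers. What Docker actually holds for one of those tags can be checked with (illustrative):

    docker image inspect k8s.gcr.io/kube-apiserver:v1.16.0 --format '{{.Id}} {{.Size}}'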
	I0229 19:16:05.988288    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:16:05.988288    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0229 19:16:06.015433    3012 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:16:06.015433    3012 out.go:239] * 
	W0229 19:16:06.017885    3012 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:16:06.022944    3012 out.go:177] X Problems detected in kubelet:
	I0229 19:16:06.028715    3012 out.go:177]   Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:16:06.033657    3012 out.go:177]   Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:16:06.038773    3012 out.go:177]   Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:16:06.045519    3012 out.go:177] 
	W0229 19:16:06.047748    3012 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:16:06.047748    3012 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:16:06.047748    3012 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
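Both suggestions above are actionable as-is. The minikube flag passes the cgroup driver straight through to kubelet (profile name taken from this run's node name):

    minikube start -p old-k8s-version-718400 --extra-config=kubelet.cgroup-driver=systemd

Alternatively, the '[WARNING IsDockerSystemdCheck]' in the stderr above can be addressed on the Docker side by setting the daemon's cgroup driver in /etc/docker/daemon.json and restarting Docker:

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }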
	I0229 19:16:06.051267    3012 out.go:177] 
	
	
	==> Docker <==
	Feb 29 19:03:23 old-k8s-version-718400 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 19:03:23 old-k8s-version-718400 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 19:03:23 old-k8s-version-718400 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 19:03:23 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:23.908449437Z" level=info msg="Starting up"
	Feb 29 19:03:25 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:25.453074169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 19:03:29 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:29.617627017Z" level=info msg="Loading containers: start."
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.067844531Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.886492123Z" level=info msg="Loading containers: done."
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963282056Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963370460Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963427362Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963436363Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963528467Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963595770Z" level=info msg="Daemon has completed initialization"
	Feb 29 19:03:31 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:31.037611678Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 19:03:31 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:31.037784386Z" level=info msg="API listen on [::]:2376"
	Feb 29 19:03:31 old-k8s-version-718400 systemd[1]: Started Docker Application Container Engine.
	Feb 29 19:07:53 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:53.877388254Z" level=info msg="ignoring event" container=00dcd44138414fbd8965f28b931eb93164beb21430ec662545b929dfe6822dcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:07:54 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:54.431453799Z" level=info msg="ignoring event" container=58acdae9ec28479642033ce37d3db918ab22684a2dafc6360486626387f00593 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:07:54 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:54.715596600Z" level=info msg="ignoring event" container=7cb8ba66f44be22d5926c76ced15a7c22abc36ed1844c239bd119d2df8cf1bfa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:07:55 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:55.002448473Z" level=info msg="ignoring event" container=7623f76bfea3ea27f4702817b0a177403ec6ad37f0ad9034d57a14d6d76bbe5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:59 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:11:59.168499080Z" level=info msg="ignoring event" container=286f5c24dc54e22c277c2d7533deee004574c731c5a32688673184742035419f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:59 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:11:59.498058343Z" level=info msg="ignoring event" container=ab1416acf165ce195fbbab48c0bf328d249009519a6a04e143342678dc5615a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:59 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:11:59.948431317Z" level=info msg="ignoring event" container=07bad9d32f3c93a5778f334aed71e422983f5ddb923821f3a504c2ed84f9ff1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:12:01 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:12:01.183444154Z" level=info msg="ignoring event" container=1af61f1ac1c706a76384505a3e4aeedf60474c1ab76fee0df282eadc631a1d2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
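[Editor's note] The dockerd journal entries above all share one shape: a syslog prefix, then a logrus record with time=..., level=..., msg=.... A minimal Go sketch of pulling out those fields (the regexp is illustrative, not how minikube parses these, and it assumes msg contains no escaped quotes, which holds for the samples above):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches the logrus payload in lines like the dockerd entries above.
    var logrusRe = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="([^"]*)"`)

    func main() {
    	line := `Feb 29 19:03:23 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:23.908449437Z" level=info msg="Starting up"`
    	if m := logrusRe.FindStringSubmatch(line); m != nil {
    		fmt.Printf("ts=%s level=%s msg=%q\n", m[1], m[2], m[3])
    	}
    }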
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
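[Editor's note] That "connection refused" from kubectl means nothing is listening on the apiserver port at all, which matches the empty container-status table above. A quick probe in Go, a sketch using only the address and port visible in the output:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the apiserver port the same way the failing kubectl call hits it.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err) // the "connection refused" case above
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8443 is accepting connections")
    }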
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 19:16:10 up  3:21,  0 users,  load average: 3.06, 5.69, 5.61
	Linux old-k8s-version-718400 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 29 19:16:08 old-k8s-version-718400 kubelet[11316]: E0229 19:16:08.999560   11316 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.073406   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.173936   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.201546   11316 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.274506   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.375323   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.400433   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.475965   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.576716   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: I0229 19:16:09.588961   11316 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.606568   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-718400&limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: I0229 19:16:09.630679   11316 kubelet_node_status.go:72] Attempting to register node old-k8s-version-718400
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.677456   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.778883   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.787838   11316 kubelet_node_status.go:94] Unable to register node "old-k8s-version-718400" with API server: Post https://control-plane.minikube.internal:8443/api/v1/nodes: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.879591   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.980277   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:09 old-k8s-version-718400 kubelet[11316]: E0229 19:16:09.988074   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-718400&limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.080935   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.181375   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.189892   11316 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.281987   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.382544   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.390279   11316 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:16:10 old-k8s-version-718400 kubelet[11316]: E0229 19:16:10.483119   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
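[Editor's note] The odd "%!D(MISSING)" inside the reflector URLs above is not part of the request. The URL actually carried the percent-encoded "=" ("%3D") in fieldSelector=metadata.name%3D..., but it was later passed through a Printf-style formatter, which read "%3D" as verb %D with width 3 and no argument. A two-line Go reproduction:

    package main

    import "fmt"

    func main() {
    	url := "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dold-k8s-version-718400"
    	fmt.Printf(url + "\n")  // wrong: %3D parsed as a verb -> "...metadata.name%!D(MISSING)old-k8s-version-718400"
    	fmt.Printf("%s\n", url) // right: the URL passed as an argument is printed verbatim
    }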
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:16:08.717812    7792 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 2 (1.3403141s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:16:11.172236    7440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-718400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (804.97s)
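[Editor's note] The status check above renders a Go text/template over minikube's status struct, which is why --format={{.APIServer}} prints just "Stopped"; the non-zero exit apparently encodes the stopped state, hence the harness's "(may be ok)". A sketch with a stand-in struct (the field set here is simplified, not minikube's actual type):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status stands in for the struct minikube's `status --format` template renders.
    type Status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
    	// Prints "Stopped", exactly the stdout captured above.
    	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
    }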

                                                
                                    
TestNetworkPlugins/group/flannel/Start (65.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p flannel-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: exit status 85 (1m5.4449466s)

                                                
                                                
-- stdout --
	* [flannel-652900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node flannel-652900 in cluster flannel-652900
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Stopping node "flannel-652900"  ...
	* Powering off "flannel-652900" via SSH ...
	* Deleting "flannel-652900" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:11:51.002805    1552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 19:11:51.088950    1552 out.go:291] Setting OutFile to fd 1692 ...
	I0229 19:11:51.090070    1552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:11:51.090070    1552 out.go:304] Setting ErrFile to fd 1536...
	I0229 19:11:51.090070    1552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:11:51.115977    1552 out.go:298] Setting JSON to false
	I0229 19:11:51.120551    1552 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11871,"bootTime":1709222039,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 19:11:51.120551    1552 start.go:137] gopshost.Virtualization returned error: not implemented yet
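[Editor's note] The hostinfo line and the "gopshost.Virtualization returned error: not implemented yet" warning both come from the gopsutil library. A minimal reproduction, assuming module github.com/shirou/gopsutil/v3; on Windows, Virtualization returns exactly that error:

    package main

    import (
    	"fmt"

    	"github.com/shirou/gopsutil/v3/host"
    )

    func main() {
    	info, err := host.Info() // the call behind the hostinfo line above
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s %s uptime=%ds procs=%d\n", info.Platform, info.PlatformVersion, info.Uptime, info.Procs)

    	// On Windows this returns "not implemented yet", as logged above.
    	if _, _, err := host.Virtualization(); err != nil {
    		fmt.Println("virtualization:", err)
    	}
    }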
	I0229 19:11:51.149897    1552 out.go:177] * [flannel-652900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:11:51.159758    1552 notify.go:220] Checking for updates...
	I0229 19:11:51.163282    1552 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:11:51.170291    1552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:11:51.177948    1552 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 19:11:51.187696    1552 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:11:51.232710    1552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:11:51.238057    1552 config.go:182] Loaded profile config "custom-flannel-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:11:51.238057    1552 config.go:182] Loaded profile config "enable-default-cni-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:11:51.238885    1552 config.go:182] Loaded profile config "old-k8s-version-718400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 19:11:51.238885    1552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:11:51.563392    1552 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 19:11:51.573399    1552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:11:52.017604    1552 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:101 SystemTime:2024-02-29 19:11:51.952399552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
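[Editor's note] Each `docker system info --format "{{json .}}"` run above hands minikube one large JSON document, which it decodes into the struct echoed in the log. A trimmed sketch of the same round-trip; only a few of the fields visible above are modeled here:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // A handful of the fields in the log line above; docker emits many more.
    type dockerInfo struct {
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    	ServerVersion   string `json:"ServerVersion"`
    	OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s %s: %d CPUs, %d bytes RAM\n", info.OperatingSystem, info.ServerVersion, info.NCPU, info.MemTotal)
    }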
	I0229 19:11:52.029420    1552 out.go:177] * Using the docker driver based on user configuration
	I0229 19:11:52.037948    1552 start.go:299] selected driver: docker
	I0229 19:11:52.037948    1552 start.go:903] validating driver "docker" against <nil>
	I0229 19:11:52.037948    1552 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:11:52.109187    1552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:11:52.543955    1552 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:91 SystemTime:2024-02-29 19:11:52.504197307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:11:52.544476    1552 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 19:11:52.545866    1552 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:11:52.550485    1552 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 19:11:52.553962    1552 cni.go:84] Creating CNI manager for "flannel"
	I0229 19:11:52.554016    1552 start_flags.go:318] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0229 19:11:52.554016    1552 start_flags.go:323] config:
	{Name:flannel-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:11:52.557831    1552 out.go:177] * Starting control plane node flannel-652900 in cluster flannel-652900
	I0229 19:11:52.563316    1552 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 19:11:52.566261    1552 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 19:11:52.572866    1552 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:11:52.572866    1552 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 19:11:52.572866    1552 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 19:11:52.572866    1552 cache.go:56] Caching tarball of preloaded images
	I0229 19:11:52.572866    1552 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:11:52.572866    1552 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:11:52.574106    1552 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-652900\config.json ...
	I0229 19:11:52.574250    1552 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\flannel-652900\config.json: {Name:mk9eeb3ea2f64b84f6976cf637131d89382ceb82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
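[Editor's note] Writes to the profile's config.json go through a named lock with `Delay:500ms Timeout:1m0s`, i.e. retry every 500ms for up to a minute. A stand-in acquire loop with those semantics (minikube's lock helper wraps a proper cross-process mutex; the lock-file approach below is only for illustration):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lock file until timeout, mirroring the
    // {Delay:500ms Timeout:1m0s} spec in the log line above.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire(os.TempDir()+"/config.lock", 500*time.Millisecond, time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; safe to write config.json")
    }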
	I0229 19:11:52.761931    1552 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 19:11:52.761994    1552 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 19:11:52.761994    1552 cache.go:194] Successfully downloaded all kic artifacts
	I0229 19:11:52.762051    1552 start.go:365] acquiring machines lock for flannel-652900: {Name:mkfc5016029d4182972692343307fb9adb0f0484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:11:52.762216    1552 start.go:369] acquired machines lock for "flannel-652900" in 75.2µs
	I0229 19:11:52.762554    1552 start.go:93] Provisioning new machine with config: &{Name:flannel-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:11:52.762949    1552 start.go:125] createHost starting for "" (driver="docker")
	I0229 19:11:52.772201    1552 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0229 19:11:52.772890    1552 start.go:159] libmachine.API.Create for "flannel-652900" (driver="docker")
	I0229 19:11:52.773059    1552 client.go:168] LocalClient.Create starting
	I0229 19:11:52.773790    1552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0229 19:11:52.773790    1552 main.go:141] libmachine: Decoding PEM data...
	I0229 19:11:52.773790    1552 main.go:141] libmachine: Parsing certificate...
	I0229 19:11:52.774588    1552 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0229 19:11:52.775019    1552 main.go:141] libmachine: Decoding PEM data...
	I0229 19:11:52.775019    1552 main.go:141] libmachine: Parsing certificate...
	I0229 19:11:52.794961    1552 cli_runner.go:164] Run: docker network inspect flannel-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 19:11:53.024793    1552 cli_runner.go:211] docker network inspect flannel-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 19:11:53.042225    1552 network_create.go:281] running [docker network inspect flannel-652900] to gather additional debugging logs...
	I0229 19:11:53.042258    1552 cli_runner.go:164] Run: docker network inspect flannel-652900
	W0229 19:11:53.241263    1552 cli_runner.go:211] docker network inspect flannel-652900 returned with exit code 1
	I0229 19:11:53.241331    1552 network_create.go:284] error running [docker network inspect flannel-652900]: docker network inspect flannel-652900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-652900 not found
	I0229 19:11:53.241331    1552 network_create.go:286] output of [docker network inspect flannel-652900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-652900 not found
	
	** /stderr **
	I0229 19:11:53.259799    1552 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 19:11:53.522280    1552 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:11:53.552882    1552 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:11:53.583713    1552 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:11:53.605979    1552 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023b9470}
	I0229 19:11:53.606076    1552 network_create.go:124] attempt to create docker network flannel-652900 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0229 19:11:53.616238    1552 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-652900 flannel-652900
	I0229 19:11:54.009141    1552 network_create.go:108] docker network flannel-652900 192.168.76.0/24 created
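[Editor's note] The three "skipping subnet ... that is reserved" lines followed by "using free private subnet 192.168.76.0/24" show the allocator stepping through candidate private /24s (49, 58, 67, 76, that is, 9 apart) until one is unclaimed. A simplified sketch of that scan; the reserved set is read straight off the log, and the step size is inferred from it rather than taken from minikube's source:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Subnets already held by other minikube networks, per the log above.
    	reserved := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	for third := 49; third <= 255; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if reserved[cidr] {
    			fmt.Println("skipping reserved subnet", cidr)
    			continue
    		}
    		if _, _, err := net.ParseCIDR(cidr); err == nil {
    			fmt.Println("using free private subnet", cidr) // lands on 192.168.76.0/24 here
    			return
    		}
    	}
    }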
	I0229 19:11:54.009312    1552 kic.go:121] calculated static IP "192.168.76.2" for the "flannel-652900" container
	I0229 19:11:54.045431    1552 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 19:11:54.300608    1552 cli_runner.go:164] Run: docker volume create flannel-652900 --label name.minikube.sigs.k8s.io=flannel-652900 --label created_by.minikube.sigs.k8s.io=true
	I0229 19:11:54.628113    1552 oci.go:103] Successfully created a docker volume flannel-652900
	I0229 19:11:54.639682    1552 cli_runner.go:164] Run: docker run --rm --name flannel-652900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-652900 --entrypoint /usr/bin/test -v flannel-652900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 19:11:57.572893    1552 cli_runner.go:217] Completed: docker run --rm --name flannel-652900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-652900 --entrypoint /usr/bin/test -v flannel-652900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.9331515s)
	I0229 19:11:57.572893    1552 oci.go:107] Successfully prepared a docker volume flannel-652900
	I0229 19:11:57.572893    1552 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:11:57.572893    1552 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 19:11:57.586694    1552 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v flannel-652900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 19:12:29.026331    1552 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v flannel-652900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (31.4393792s)
	I0229 19:12:29.026331    1552 kic.go:203] duration metric: took 31.453180 seconds to extract preloaded images to volume
	I0229 19:12:29.037052    1552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:12:29.760561    1552 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:86 SystemTime:2024-02-29 19:12:29.688981645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:12:29.776462    1552 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 19:12:30.548501    1552 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-652900 --name flannel-652900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-652900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-652900 --network flannel-652900 --ip 192.168.76.2 --volume flannel-652900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 19:12:33.234189    1552 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-652900 --name flannel-652900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-652900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-652900 --network flannel-652900 --ip 192.168.76.2 --volume flannel-652900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08: (2.6856112s)
	I0229 19:12:33.255495    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Running}}
	I0229 19:12:33.554527    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:33.839982    1552 cli_runner.go:164] Run: docker exec flannel-652900 stat /var/lib/dpkg/alternatives/iptables
	I0229 19:12:34.213885    1552 oci.go:144] the created container "flannel-652900" has a running status.
	I0229 19:12:34.214068    1552 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\id_rsa...
	I0229 19:12:34.375502    1552 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 19:12:34.745915    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:35.032089    1552 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 19:12:35.032089    1552 kic_runner.go:114] Args: [docker exec --privileged flannel-652900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 19:12:35.451673    1552 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\id_rsa...
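[Editor's note] Steps 19:12:34-35 above generate an SSH keypair, copy the public half into the container as /home/docker/.ssh/authorized_keys (381 bytes here), and chown it. A sketch of the key-generation side using crypto/rsa and golang.org/x/crypto/ssh; the key size and output handling are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate the keypair (minikube writes these out as id_rsa / id_rsa.pub).
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})

    	pub, err := ssh.NewPublicKey(&priv.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	// The authorized_keys line; roughly the 381 bytes noted in the log above.
    	authorizedKey := ssh.MarshalAuthorizedKey(pub)
    	fmt.Printf("private key: %d bytes, authorized_keys entry: %d bytes\n", len(privPEM), len(authorizedKey))
    }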
	I0229 19:12:38.465612    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:38.677956    1552 machine.go:88] provisioning docker machine ...
	I0229 19:12:38.678496    1552 ubuntu.go:169] provisioning hostname "flannel-652900"
	I0229 19:12:38.689632    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:38.899216    1552 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:38.911677    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61240 <nil> <nil>}
	I0229 19:12:38.911742    1552 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-652900 && echo "flannel-652900" | sudo tee /etc/hostname
	I0229 19:12:39.138197    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-652900
	
	I0229 19:12:39.155529    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:39.393424    1552 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:39.394234    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61240 <nil> <nil>}
	I0229 19:12:39.394395    1552 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-652900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-652900/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-652900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:12:39.608936    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:12:39.608936    1552 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 19:12:39.608936    1552 ubuntu.go:177] setting up certificates
	I0229 19:12:39.608936    1552 provision.go:83] configureAuth start
	I0229 19:12:39.622953    1552 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-652900
	I0229 19:12:39.866774    1552 provision.go:138] copyHostCerts
	I0229 19:12:39.867626    1552 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:12:39.867626    1552 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 19:12:39.868162    1552 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 19:12:39.869578    1552 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:12:39.869692    1552 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 19:12:39.870085    1552 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:12:39.871319    1552 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:12:39.871372    1552 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 19:12:39.877169    1552 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:12:39.879020    1552 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.flannel-652900 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube flannel-652900]
	I0229 19:12:40.078875    1552 provision.go:172] copyRemoteCerts
	I0229 19:12:40.093057    1552 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:12:40.103733    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:40.283427    1552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61240 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\id_rsa Username:docker}
	I0229 19:12:40.413844    1552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 19:12:40.488951    1552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 19:12:40.542784    1552 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:12:40.597363    1552 provision.go:86] duration metric: configureAuth took 988.2727ms
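[Editor's note] configureAuth (19:12:39.60 through 19:12:40.59 above) copies the host certs and then generates a server certificate whose SANs are exactly the list logged: the container IP, loopback, and the hostnames. A compact sketch with crypto/x509; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-652900"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration from the config dump
    		// The SANs logged above: san=[192.168.76.2 127.0.0.1 localhost ... flannel-652900]
    		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "flannel-652900"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here for brevity; minikube uses its CA as the parent.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server cert: %d DER bytes\n", len(der))
    }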
	I0229 19:12:40.597420    1552 ubuntu.go:193] setting minikube options for container-runtime
	I0229 19:12:40.597933    1552 config.go:182] Loaded profile config "flannel-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:12:40.612028    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:40.819711    1552 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:40.820680    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61240 <nil> <nil>}
	I0229 19:12:40.820680    1552 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:12:41.016730    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 19:12:41.016797    1552 ubuntu.go:71] root file system type: overlay
	I0229 19:12:41.017071    1552 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:12:41.034387    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:41.287627    1552 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:41.287627    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61240 <nil> <nil>}
	I0229 19:12:41.288260    1552 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:12:41.535097    1552 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
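[Editor's note] The rendered unit is written to docker.service.new first; the SSH command that follows (`sudo diff -u ... || { mv ...; systemctl restart docker; }`) swaps it in and restarts dockerd only when the content differs, since diff exits non-zero on change. The same idempotent-write pattern in Go, a stand-in rather than minikube's code:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // replaceIfChanged mirrors the diff-then-mv idiom: write (and signal a
    // restart) only when the rendered content differs from what is installed.
    func replaceIfChanged(path string, rendered []byte) (bool, error) {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, rendered) {
    		return false, nil // unchanged: skip the restart
    	}
    	if err := os.WriteFile(path, rendered, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=Docker Application Container Engine\n"))
    	fmt.Println("changed:", changed, "err:", err)
    }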
	I0229 19:12:41.546624    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:41.736099    1552 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:41.736382    1552 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61240 <nil> <nil>}
	I0229 19:12:41.736382    1552 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:12:42.672596    1552 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:12:41.515019842 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	I0229 19:12:42.672596    1552 ubuntu.go:195] Error setting container-runtime options during provisioning ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:12:41.515019842 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0229 19:12:42.672596    1552 machine.go:91] provisioned docker machine in 3.9940679s
	I0229 19:12:42.672596    1552 client.go:171] LocalClient.Create took 49.8990389s
	I0229 19:12:44.689495    1552 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 19:12:44.697860    1552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-652900
	I0229 19:12:44.945753    1552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61240 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\id_rsa Username:docker}
	I0229 19:12:45.097212    1552 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
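
Both probes parse df output: NR==2 in the awk program skips df's header row, and $5 / $4 pick out the usage percentage (from df -h) and the free space in gigabytes (from df -BG). A local sketch of the same extraction without awk (hypothetical helper):

// dfField runs df with the given flag and returns the requested
// whitespace-separated column from the second line (NR==2 in awk terms).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dfField(flag, path string, col int) (string, error) {
	out, err := exec.Command("df", flag, path).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if col > len(fields) {
		return "", fmt.Errorf("no column %d in %q", col, lines[1])
	}
	return fields[col-1], nil
}

func main() {
	used, _ := dfField("-h", "/var", 5)  // e.g. "12%"
	free, _ := dfField("-BG", "/var", 4) // e.g. "40G"
	fmt.Println("used:", used, "free:", free)
}
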
	I0229 19:12:45.111580    1552 start.go:128] duration metric: createHost completed in 52.3482016s
	I0229 19:12:45.111580    1552 start.go:83] releasing machines lock for "flannel-652900", held for 52.3488173s
	W0229 19:12:45.111580    1552 start.go:694] error starting host: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:12:41.515019842 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0229 19:12:45.134484    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:45.331970    1552 stop.go:39] StopHost: flannel-652900
	W0229 19:12:45.332330    1552 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0229 19:12:45.337073    1552 out.go:177] * Stopping node "flannel-652900"  ...
	I0229 19:12:45.363716    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	W0229 19:12:45.560538    1552 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0229 19:12:45.563265    1552 out.go:177] * Powering off "flannel-652900" via SSH ...
	I0229 19:12:45.577071    1552 cli_runner.go:164] Run: docker exec --privileged -t flannel-652900 /bin/bash -c "sudo init 0"
	I0229 19:12:46.955504    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:47.162280    1552 oci.go:658] container flannel-652900 status is Stopped
	I0229 19:12:47.162280    1552 oci.go:670] Successfully shutdown container flannel-652900
	I0229 19:12:47.162280    1552 stop.go:88] shutdown container: err=<nil>
	I0229 19:12:47.162280    1552 main.go:141] libmachine: Stopping "flannel-652900"...
	I0229 19:12:47.194334    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:47.402228    1552 stop.go:59] stop err: Machine "flannel-652900" is already stopped.
	I0229 19:12:47.402228    1552 stop.go:62] host is already stopped
	W0229 19:12:48.411695    1552 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0229 19:12:48.415740    1552 out.go:177] * Deleting "flannel-652900" in docker ...
	I0229 19:12:48.428128    1552 cli_runner.go:164] Run: docker container inspect -f {{.Id}} flannel-652900
	I0229 19:12:48.626125    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:48.820864    1552 cli_runner.go:164] Run: docker exec --privileged -t flannel-652900 /bin/bash -c "sudo init 0"
	W0229 19:12:49.020232    1552 cli_runner.go:211] docker exec --privileged -t flannel-652900 /bin/bash -c "sudo init 0" returned with exit code 1
	I0229 19:12:49.020232    1552 oci.go:650] error shutdown flannel-652900: docker exec --privileged -t flannel-652900 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 982b25b1021e1335716aaadc39da12d8bed99b329316f663701069d7c808c60a is not running
	I0229 19:12:50.036808    1552 cli_runner.go:164] Run: docker container inspect flannel-652900 --format={{.State.Status}}
	I0229 19:12:50.234149    1552 oci.go:658] container flannel-652900 status is Stopped
	I0229 19:12:50.234267    1552 oci.go:670] Successfully shutdown container flannel-652900
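
The power-off path visible here runs "sudo init 0" inside the container and then polls docker container inspect --format={{.State.Status}} until the daemon reports it stopped; the exit-status-1 attempt during the delete pass is the expected "container ... is not running" race when the container has already halted, and the code treats it as success once the status poll confirms the stop. A rough sketch of that loop (hypothetical helper, not minikube's oci package; note Docker itself reports a stopped container's status as "exited"):

// Hypothetical shutdown loop: power off via init inside the container,
// then poll the container status until Docker reports "exited".
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func status(name string) string {
	out, _ := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const name = "flannel-652900"
	// This may exit non-zero with "container ... is not running" if the
	// container already stopped; the log shows exactly that on the retry.
	exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run()
	for i := 0; i < 30; i++ {
		if status(name) == "exited" {
			fmt.Println(name, "is stopped")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for", name, "to stop")
}
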
	I0229 19:12:50.248661    1552 cli_runner.go:164] Run: docker rm -f -v flannel-652900
	I0229 19:12:50.485723    1552 cli_runner.go:164] Run: docker container inspect -f {{.Id}} flannel-652900
	W0229 19:12:50.668194    1552 cli_runner.go:211] docker container inspect -f {{.Id}} flannel-652900 returned with exit code 1
	I0229 19:12:50.678570    1552 cli_runner.go:164] Run: docker network inspect flannel-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 19:12:50.885268    1552 cli_runner.go:164] Run: docker network rm flannel-652900
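
The --format argument in the network inspect call above is a Go text/template evaluated against the inspected object, which is why it can mix literal JSON with {{.Name}} fields, {{range}} loops, and {{index}} lookups. A self-contained illustration with a stand-in struct (not Docker's real API types):

// Evaluate a docker-style --format template against a stand-in struct.
package main

import (
	"os"
	"text/template"
)

type network struct {
	Name   string
	Driver string
}

func main() {
	const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}"}` + "\n"
	tmpl := template.Must(template.New("format").Parse(format))
	tmpl.Execute(os.Stdout, network{Name: "flannel-652900", Driver: "bridge"})
}
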
	W0229 19:12:51.291330    1552 start.go:699] delete host: api remove: remove C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\id_rsa: The process cannot access the file because it is being used by another process.
	W0229 19:12:51.291598    1552 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:12:41.515019842 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:12:41.515019842 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	I0229 19:12:51.291900    1552 start.go:709] Will try again in 5 seconds ...
	I0229 19:12:56.308370    1552 start.go:365] acquiring machines lock for flannel-652900: {Name:mkfc5016029d4182972692343307fb9adb0f0484 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:12:56.308741    1552 start.go:369] acquired machines lock for "flannel-652900" in 236.1µs
	I0229 19:12:56.308903    1552 start.go:96] Skipping create...Using existing machine configuration
	I0229 19:12:56.308903    1552 fix.go:54] fixHost starting: 
	I0229 19:12:56.308903    1552 fix.go:56] fixHost completed within 0s
	I0229 19:12:56.308903    1552 start.go:83] releasing machines lock for "flannel-652900", held for 162.6µs
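
The acquire/release pairs around the "machines lock" lines suggest a per-machine named lock whose hold time is logged on release; the Delay and Timeout fields logged at acquisition hint that the real implementation also supports polling and timeouts. A simplified sketch of the basic pattern only (assumed semantics, not minikube's start.go):

// Simplified per-name lock with acquisition and hold timing, mirroring
// the "acquiring/releasing machines lock" log lines. Assumed semantics.
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{}
)

func machineLock(name string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	if locks[name] == nil {
		locks[name] = &sync.Mutex{}
	}
	return locks[name]
}

func main() {
	l := machineLock("flannel-652900")

	start := time.Now()
	l.Lock()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))

	held := time.Now()
	// ... createHost or fixHost would run here ...
	l.Unlock()
	fmt.Printf("released machines lock, held for %s\n", time.Since(held))
}
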
	W0229 19:12:56.309750    1552 out.go:239] * Failed to start docker container. Running "minikube delete -p flannel-652900" may fix it: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "flannel-652900": open C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\config.json: The system cannot find the file specified.
	* Failed to start docker container. Running "minikube delete -p flannel-652900" may fix it: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "flannel-652900": open C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\config.json: The system cannot find the file specified.
	I0229 19:12:56.314559    1552 out.go:177] 
	W0229 19:12:56.316705    1552 out.go:239] X Exiting due to GUEST_NOT_FOUND: Failed to start host: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "flannel-652900": open C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\config.json: The system cannot find the file specified.
	X Exiting due to GUEST_NOT_FOUND: Failed to start host: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "flannel-652900": open C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\flannel-652900\config.json: The system cannot find the file specified.
	W0229 19:12:56.316705    1552 out.go:239] * Suggestion: minikube is missing files relating to your guest environment. This can be fixed by running 'minikube delete'
	* Suggestion: minikube is missing files relating to your guest environment. This can be fixed by running 'minikube delete'
	W0229 19:12:56.316705    1552 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/9130
	* Related issue: https://github.com/kubernetes/minikube/issues/9130
	I0229 19:12:56.320047    1552 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 85
--- FAIL: TestNetworkPlugins/group/flannel/Start (65.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (322.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
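
Behind this wait is a repeated pod list by label selector against the apiserver, which is why each failed attempt below surfaces as an EOF from GET .../pods?labelSelector=k8s-app%3Dkubernetes-dashboard while the cluster is down. A hedged sketch of such a poll with client-go (assuming the default kubeconfig path and a 10-second interval; the test's real helper in helpers_test.go may differ):

// Poll for pods matching a label selector until found or the deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil && len(pods.Items) > 0 {
			fmt.Println("dashboard pod found:", pods.Items[0].Name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out:", ctx.Err()) // "context deadline exceeded", as in the log
			return
		case <-time.After(10 * time.Second):
		}
	}
}
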
E0229 19:16:17.916354    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:16:24.506485    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-652900\client.crt: The system cannot find the path specified.
E0229 19:16:24.692007    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-652900\client.crt: The system cannot find the path specified.
E0229 19:16:25.390653    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-652900\client.crt: The system cannot find the path specified.
E0229 19:16:29.381459    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-653000\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:16:52.343082    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:16:58.514193    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:17:39.852127    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:17:46.619009    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-652900\client.crt: The system cannot find the path specified.
E0229 19:17:47.319658    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:18:33.769786    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:33.784412    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:33.799894    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:33.831115    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:33.877504    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:33.957780    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:34.131118    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:34.465316    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:35.120144    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:18:36.410612    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:38.975316    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:18:44.104326    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:18:47.320542    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:47.335748    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:47.350747    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:47.382566    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:47.430326    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:47.524635    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:47.685064    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:48.016985    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:48.668847    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:49.963954    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:52.527896    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:18:54.347570    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:18:57.659937    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:19:07.906450    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
E0229 19:19:14.828531    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:19:21.529386    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:19:28.396863    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:19:41.154499    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:19:55.799928    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:19:55.879173    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:20:02.651420    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-652900\client.crt: The system cannot find the path specified.
E0229 19:20:03.352159    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:20:09.357658    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:20:23.700661    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.469082    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.484264    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.499783    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.532166    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.579668    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.674068    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:25.850182    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:26.185388    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:26.839184    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:20:28.127747    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:30.472357    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\custom-flannel-652900\client.crt: The system cannot find the path specified.
E0229 19:20:30.695665    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
E0229 19:20:31.169708    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-652900\client.crt: The system cannot find the path specified.
E0229 19:20:33.498140    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-652900\client.crt: The system cannot find the path specified.
E0229 19:20:35.829781    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:20:44.718571    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:20:46.074929    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:21:00.646802    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-653000\client.crt: The system cannot find the path specified.
E0229 19:21:06.556456    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0229 19:21:17.732123    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\enable-default-cni-652900\client.crt: The system cannot find the path specified.
E0229 19:21:24.520880    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-652900\client.crt: The system cannot find the path specified.
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:60236/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-718400 -n old-k8s-version-718400
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 2 (1.1573768s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:21:28.922736    3628 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-718400" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-718400
helpers_test.go:235: (dbg) docker inspect old-k8s-version-718400:

-- stdout --
	[
	    {
	        "Id": "12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95",
	        "Created": "2024-02-29T18:51:53.042271456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T19:02:50.267481317Z",
	            "FinishedAt": "2024-02-29T19:02:44.626053594Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hostname",
	        "HostsPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/hosts",
	        "LogPath": "/var/lib/docker/containers/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95/12e46b2d6b8fa188ce0cbeabc001f5003dea2e2a18e8cd466468ca20a42e9b95-json.log",
	        "Name": "/old-k8s-version-718400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-718400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-718400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e-init/diff:/var/lib/docker/overlay2/93b520212bad25395214c0a2a80384ead8baa0a1e04ab69f20509c9ef347fcc7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b9e2189636547096a553b762afaef19e75c59cef118f7aa52d78c7f494d9a0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-718400",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-718400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-718400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-718400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5b08a179fc6ee320f3dde5003178e9a65d1c408322d65542c8d1770780903a6d",
	            "SandboxKey": "/var/run/docker/netns/5b08a179fc6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60232"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60233"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60234"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60235"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60236"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-718400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "12e46b2d6b8f",
	                        "old-k8s-version-718400"
	                    ],
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "NetworkID": "b75bbe82b7c3705bcc35a14b3795bdbd848e1be9ef602ed5c81af9b5c594adc5",
	                    "EndpointID": "38b8832480fb82510fe0ee90cee267741a9d04a575de6d049eb25622afbbf6ed",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-718400",
	                        "12e46b2d6b8f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
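
In the inspect output above, the apiserver port 8443/tcp is published on 127.0.0.1:60236, which is exactly the endpoint the failed dashboard pod-list calls were hitting, so the EOFs point at the apiserver inside the container rather than at the port mapping. The mapping itself can be double-checked with a one-liner (container name taken from this run):

	docker port old-k8s-version-718400 8443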
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
E0229 19:21:31.279591    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-652900\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 2 (1.146964s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0229 19:21:30.284886    1788 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
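
Note the split state: the container host reports Running here while the apiserver status above was Stopped, i.e. the kicbase container is up but the control plane inside it is not serving. One way to look at the inner runtime, a sketch assuming the profile's docker container runtime (ContainerRuntime=docker per the loaded profile config in these logs), is:

	minikube -p old-k8s-version-718400 ssh -- sudo docker ps --filter name=kube-apiserver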
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-718400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-718400 logs -n 25: (1.7085497s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | iptables -t nat -L -n -v                             |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl status kubelet --all                       |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl cat kubelet                                |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | journalctl -xeu kubelet --all                        |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl status docker --all                        |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl cat docker                                 |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /etc/docker/daemon.json                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo docker                        | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | system info                                          |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl status cri-docker                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl cat cri-docker                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | cri-dockerd --version                                |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl status containerd                          |                |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl cat containerd                             |                |                   |         |                     |                     |
	|         | --no-pager                                           |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /lib/systemd/system/containerd.service               |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo cat                           | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /etc/containerd/config.toml                          |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | containerd config dump                               |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC |                     |
	|         | systemctl status crio --all                          |                |                   |         |                     |                     |
	|         | --full --no-pager                                    |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo                               | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | systemctl cat crio --no-pager                        |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo find                          | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |                   |         |                     |                     |
	| ssh     | -p kubenet-652900 sudo crio                          | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|         | config                                               |                |                   |         |                     |                     |
	| delete  | -p kubenet-652900                                    | kubenet-652900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC | 29 Feb 24 19:16 UTC |
	|---------|------------------------------------------------------|----------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:13:13
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:13:13.114953    2612 out.go:291] Setting OutFile to fd 1580 ...
	I0229 19:13:13.114953    2612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:13:13.115937    2612 out.go:304] Setting ErrFile to fd 1056...
	I0229 19:13:13.115937    2612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:13:13.146477    2612 out.go:298] Setting JSON to false
	I0229 19:13:13.150293    2612 start.go:129] hostinfo: {"hostname":"minikube7","uptime":11953,"bootTime":1709222039,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 19:13:13.150293    2612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:13:13.160115    2612 out.go:177] * [kubenet-652900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:13:13.165656    2612 notify.go:220] Checking for updates...
	I0229 19:13:13.170396    2612 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:13:13.173955    2612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:13:13.178382    2612 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 19:13:13.185388    2612 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:13:13.191391    2612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:13:13.195418    2612 config.go:182] Loaded profile config "bridge-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:13.196386    2612 config.go:182] Loaded profile config "enable-default-cni-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:13.196386    2612 config.go:182] Loaded profile config "old-k8s-version-718400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 19:13:13.196386    2612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:13:13.508755    2612 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 19:13:13.519462    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:13:13.913922    2612 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:13:13.872451135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:13:13.917011    2612 out.go:177] * Using the docker driver based on user configuration
	I0229 19:13:13.923712    2612 start.go:299] selected driver: docker
	I0229 19:13:13.923712    2612 start.go:903] validating driver "docker" against <nil>
	I0229 19:13:13.923793    2612 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:13:14.015077    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:13:14.447807    2612 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:13:14.38915421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:13:14.448913    2612 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 19:13:14.453412    2612 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:13:14.458417    2612 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 19:13:14.461396    2612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0229 19:13:14.461396    2612 start_flags.go:323] config:
	{Name:kubenet-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:13:14.466501    2612 out.go:177] * Starting control plane node kubenet-652900 in cluster kubenet-652900
	I0229 19:13:14.471913    2612 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 19:13:14.476676    2612 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 19:13:14.483392    2612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:14.483392    2612 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 19:13:14.483392    2612 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 19:13:14.483392    2612 cache.go:56] Caching tarball of preloaded images
	I0229 19:13:14.484389    2612 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:13:14.484389    2612 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:13:14.484389    2612 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\config.json ...
	I0229 19:13:14.484389    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\config.json: {Name:mk2050ee8b43989e221b5fd49b0f1b7245551d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:14.709795    2612 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 19:13:14.709851    2612 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 19:13:14.709959    2612 cache.go:194] Successfully downloaded all kic artifacts
	I0229 19:13:14.709959    2612 start.go:365] acquiring machines lock for kubenet-652900: {Name:mk79ba672a2aee00cda4bd47db1909e85635aa9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:13:14.710463    2612 start.go:369] acquired machines lock for "kubenet-652900" in 503.6µs
	I0229 19:13:14.710463    2612 start.go:93] Provisioning new machine with config: &{Name:kubenet-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:13:14.710463    2612 start.go:125] createHost starting for "" (driver="docker")
	I0229 19:13:10.111596   14472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:13:10.131123   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=enable-default-cni-652900 minikube.k8s.io/updated_at=2024_02_29T19_13_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:10.132051   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:10.986069   14472 ops.go:34] apiserver oom_adj: -16
	I0229 19:13:11.006886   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:11.506093   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.003892   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.505744   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.006338   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.516251   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.015077   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.522305   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:15.009431   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.115411    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:12.617769    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.109940    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:13.608490    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.109831    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.614045    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:15.118067    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:15.623488    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.118778    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.617128    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:14.720884    2612 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0229 19:13:14.720884    2612 start.go:159] libmachine.API.Create for "kubenet-652900" (driver="docker")
	I0229 19:13:14.720884    2612 client.go:168] LocalClient.Create starting
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Decoding PEM data...
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Parsing certificate...
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Decoding PEM data...
	I0229 19:13:14.721781    2612 main.go:141] libmachine: Parsing certificate...
	I0229 19:13:14.732778    2612 cli_runner.go:164] Run: docker network inspect kubenet-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 19:13:14.908726    2612 cli_runner.go:211] docker network inspect kubenet-652900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 19:13:14.918355    2612 network_create.go:281] running [docker network inspect kubenet-652900] to gather additional debugging logs...
	I0229 19:13:14.918458    2612 cli_runner.go:164] Run: docker network inspect kubenet-652900
	W0229 19:13:15.095868    2612 cli_runner.go:211] docker network inspect kubenet-652900 returned with exit code 1
	I0229 19:13:15.095962    2612 network_create.go:284] error running [docker network inspect kubenet-652900]: docker network inspect kubenet-652900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-652900 not found
	I0229 19:13:15.096018    2612 network_create.go:286] output of [docker network inspect kubenet-652900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-652900 not found
	
	** /stderr **
	I0229 19:13:15.107891    2612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 19:13:15.320691    2612 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:13:15.351530    2612 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:13:15.383291    2612 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 19:13:15.404839    2612 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023a7200}
	I0229 19:13:15.404839    2612 network_create.go:124] attempt to create docker network kubenet-652900 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0229 19:13:15.412881    2612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-652900 kubenet-652900
	I0229 19:13:15.717904    2612 network_create.go:108] docker network kubenet-652900 192.168.76.0/24 created
	I0229 19:13:15.718057    2612 kic.go:121] calculated static IP "192.168.76.2" for the "kubenet-652900" container
	I0229 19:13:15.739887    2612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 19:13:15.930917    2612 cli_runner.go:164] Run: docker volume create kubenet-652900 --label name.minikube.sigs.k8s.io=kubenet-652900 --label created_by.minikube.sigs.k8s.io=true
	I0229 19:13:16.120778    2612 oci.go:103] Successfully created a docker volume kubenet-652900
	I0229 19:13:16.129771    2612 cli_runner.go:164] Run: docker run --rm --name kubenet-652900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --entrypoint /usr/bin/test -v kubenet-652900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 19:13:15.504908   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.007404   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:16.506464   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.008405   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.509333   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.014649   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.501539   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.002733   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.507820   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:20.005877   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.129975    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:17.607209    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.109391    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:18.612090    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.119804    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:19.618529    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:20.121893    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:20.623118    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.116510    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.613786    7948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.820704    7948 kubeadm.go:1088] duration metric: took 12.123241s to wait for elevateKubeSystemPrivileges.
	I0229 19:13:21.820704    7948 kubeadm.go:406] StartCluster complete in 29.1205832s
	I0229 19:13:21.820704    7948 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:21.821717    7948 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:13:21.823696    7948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:21.824699    7948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:13:21.824699    7948 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:13:21.824699    7948 addons.go:69] Setting storage-provisioner=true in profile "bridge-652900"
	I0229 19:13:21.824699    7948 addons.go:69] Setting default-storageclass=true in profile "bridge-652900"
	I0229 19:13:21.825718    7948 addons.go:234] Setting addon storage-provisioner=true in "bridge-652900"
	I0229 19:13:21.825718    7948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-652900"
	I0229 19:13:21.825718    7948 host.go:66] Checking if "bridge-652900" exists ...
	I0229 19:13:21.825718    7948 config.go:182] Loaded profile config "bridge-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:21.853287    7948 cli_runner.go:164] Run: docker container inspect bridge-652900 --format={{.State.Status}}
	I0229 19:13:21.854304    7948 cli_runner.go:164] Run: docker container inspect bridge-652900 --format={{.State.Status}}
	I0229 19:13:22.080475    7948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:13:22.086225    7948 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:22.086225    7948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:13:22.101285    7948 addons.go:234] Setting addon default-storageclass=true in "bridge-652900"
	I0229 19:13:22.102281    7948 host.go:66] Checking if "bridge-652900" exists ...
	I0229 19:13:18.776378    2612 cli_runner.go:217] Completed: docker run --rm --name kubenet-652900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --entrypoint /usr/bin/test -v kubenet-652900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.6465875s)
	I0229 19:13:18.776378    2612 oci.go:107] Successfully prepared a docker volume kubenet-652900
	I0229 19:13:18.776378    2612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:18.776378    2612 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 19:13:18.784411    2612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-652900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 19:13:22.107311    7948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-652900
	I0229 19:13:22.136294    7948 cli_runner.go:164] Run: docker container inspect bridge-652900 --format={{.State.Status}}
	I0229 19:13:22.322083    7948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61228 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-652900\id_rsa Username:docker}
	I0229 19:13:22.368078    7948 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:22.368078    7948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:13:22.378084    7948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-652900
	I0229 19:13:22.493315    7948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:22.589201    7948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61228 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\bridge-652900\id_rsa Username:docker}
	I0229 19:13:22.833831    7948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:23.493584    7948 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-652900" context rescaled to 1 replicas
	I0229 19:13:23.493676    7948 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:13:23.507884    7948 out.go:177] * Verifying Kubernetes components...
	I0229 19:13:23.505892    7948 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.6811803s)
	I0229 19:13:20.510033   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.023547   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:21.505737   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:22.017024   14472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:23.500875   14472 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.4838401s)
	I0229 19:13:23.500875   14472 kubeadm.go:1088] duration metric: took 13.3881769s to wait for elevateKubeSystemPrivileges.
	I0229 19:13:23.500875   14472 kubeadm.go:406] StartCluster complete in 31.0104072s
	I0229 19:13:23.500875   14472 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:23.500875   14472 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:13:23.507884   14472 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:23.509894   14472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:13:23.509894   14472 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:13:23.509894   14472 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-652900"
	I0229 19:13:23.509894   14472 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-652900"
	I0229 19:13:23.510900   14472 host.go:66] Checking if "enable-default-cni-652900" exists ...
	I0229 19:13:23.510900   14472 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-652900"
	I0229 19:13:23.510900   14472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-652900"
	I0229 19:13:23.510900   14472 config.go:182] Loaded profile config "enable-default-cni-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:23.536885   14472 cli_runner.go:164] Run: docker container inspect enable-default-cni-652900 --format={{.State.Status}}
	I0229 19:13:23.537877   14472 cli_runner.go:164] Run: docker container inspect enable-default-cni-652900 --format={{.State.Status}}
	I0229 19:13:23.756407   14472 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-652900"
	I0229 19:13:23.756530   14472 host.go:66] Checking if "enable-default-cni-652900" exists ...
	I0229 19:13:23.776475   14472 cli_runner.go:164] Run: docker container inspect enable-default-cni-652900 --format={{.State.Status}}
	I0229 19:13:23.926146   14472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:13:24.085413   14472 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:24.085509   14472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:13:24.105407   14472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-652900
	I0229 19:13:24.128905   14472 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:24.128905   14472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:13:24.139921   14472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-652900
	I0229 19:13:24.306546   14472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61236 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\enable-default-cni-652900\id_rsa Username:docker}
	I0229 19:13:24.348742   14472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61236 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\enable-default-cni-652900\id_rsa Username:docker}
	I0229 19:13:24.457232   14472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:13:24.501584   14472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:13:24.604676   14472 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.0947737s)
	I0229 19:13:24.604676   14472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:13:24.917705   14472 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-652900" context rescaled to 1 replicas
	I0229 19:13:24.917705   14472 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:13:24.925429   14472 out.go:177] * Verifying Kubernetes components...
	I0229 19:13:24.948114   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:23.507884    7948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:13:23.528883    7948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3305369s)
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.6710504s)
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.6534727s)
	I0229 19:13:25.164385    7948 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0229 19:13:25.164385    7948 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.6354895s)
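	(The replace pipeline completed two lines above rewrites the live coredns ConfigMap so cluster workloads can resolve host.minikube.internal. A minimal standalone sketch of the same edit, assuming kubectl access to the cluster and the stock Corefile layout — the 192.168.65.254 gateway address is specific to this Docker Desktop run, and the logged command additionally injects a `log` directive, omitted here:
	# Insert a hosts{} block ahead of the forward directive, then replace the ConfigMap in place.
	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl -n kube-system replace -f -
	)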
	I0229 19:13:25.177214    7948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" bridge-652900
	I0229 19:13:25.212476    7948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:13:25.217487    7948 addons.go:505] enable addons completed in 3.3927627s: enabled=[storage-provisioner default-storageclass]
	I0229 19:13:25.387810    7948 node_ready.go:35] waiting up to 15m0s for node "bridge-652900" to be "Ready" ...
	I0229 19:13:25.398481    7948 node_ready.go:49] node "bridge-652900" has status "Ready":"True"
	I0229 19:13:25.398596    7948 node_ready.go:38] duration metric: took 10.4804ms waiting for node "bridge-652900" to be "Ready" ...
	I0229 19:13:25.398596    7948 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:25.417028    7948 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.5961108s)
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.4930199s)
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.1495841s)
	I0229 19:13:28.097722   14472 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0229 19:13:28.097722   14472 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.6404631s)
	I0229 19:13:28.115790   14472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-652900
	I0229 19:13:28.344161   14472 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-652900" to be "Ready" ...
	I0229 19:13:28.424012   14472 node_ready.go:49] node "enable-default-cni-652900" has status "Ready":"True"
	I0229 19:13:28.424119   14472 node_ready.go:38] duration metric: took 79.9573ms waiting for node "enable-default-cni-652900" to be "Ready" ...
	I0229 19:13:28.424195   14472 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:28.455459   14472 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-xt2b4" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:28.470475   14472 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:13:28.474499   14472 addons.go:505] enable addons completed in 4.9645682s: enabled=[storage-provisioner default-storageclass]
	I0229 19:13:30.478175   14472 pod_ready.go:92] pod "coredns-5dd5756b68-xt2b4" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.478252   14472 pod_ready.go:81] duration metric: took 2.0227782s waiting for pod "coredns-5dd5756b68-xt2b4" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.478252   14472 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.493222   14472 pod_ready.go:92] pod "etcd-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.493281   14472 pod_ready.go:81] duration metric: took 15.0285ms waiting for pod "etcd-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.493281   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.507457   14472 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.507457   14472 pod_ready.go:81] duration metric: took 14.1759ms waiting for pod "kube-apiserver-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.507457   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.523650   14472 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.523650   14472 pod_ready.go:81] duration metric: took 16.1928ms waiting for pod "kube-controller-manager-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.523650   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-7fs6d" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.538278   14472 pod_ready.go:92] pod "kube-proxy-7fs6d" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.538278   14472 pod_ready.go:81] duration metric: took 14.628ms waiting for pod "kube-proxy-7fs6d" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.538278   14472 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.877653   14472 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.877653   14472 pod_ready.go:81] duration metric: took 339.3727ms waiting for pod "kube-scheduler-enable-default-cni-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.877653   14472 pod_ready.go:38] duration metric: took 2.4534397s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:30.877653   14472 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:13:30.893957   14472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:13:30.925352   14472 api_server.go:72] duration metric: took 6.0076026s to wait for apiserver process to appear ...
	I0229 19:13:30.925352   14472 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:13:30.925352   14472 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61235/healthz ...
	I0229 19:13:30.941376   14472 api_server.go:279] https://127.0.0.1:61235/healthz returned 200:
	ok
	I0229 19:13:30.945348   14472 api_server.go:141] control plane version: v1.28.4
	I0229 19:13:30.945348   14472 api_server.go:131] duration metric: took 19.9959ms to wait for apiserver health ...
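	(The healthz wait above probes the apiserver through the port Docker published on loopback. The equivalent manual check — the host port is assigned per run, 61235 here, and -k is assumed because the apiserver presents the cluster's own CA rather than a publicly trusted certificate:
	# Expect HTTP 200 with body "ok" once the apiserver is healthy.
	curl -sk https://127.0.0.1:61235/healthz
	)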
	I0229 19:13:30.945348   14472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:13:31.088416   14472 system_pods.go:59] 7 kube-system pods found
	I0229 19:13:31.088541   14472 system_pods.go:61] "coredns-5dd5756b68-xt2b4" [95171e04-c522-46e5-a759-27b5869d7d36] Running
	I0229 19:13:31.088541   14472 system_pods.go:61] "etcd-enable-default-cni-652900" [4ddbc764-adad-45c3-80e0-138cc7ef3f8c] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-apiserver-enable-default-cni-652900" [047f3d07-34a0-424d-ad95-6c6d503db759] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-controller-manager-enable-default-cni-652900" [6153d2a9-409e-4cd8-bc71-864e2d37041c] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-proxy-7fs6d" [32abedad-6c34-4043-9e4e-524d2f955678] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "kube-scheduler-enable-default-cni-652900" [730eb52d-7be7-407f-a452-6d72396d7af2] Running
	I0229 19:13:31.088587   14472 system_pods.go:61] "storage-provisioner" [99f20e05-1657-4023-9d05-6c06d4ba3cf7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 19:13:31.088587   14472 system_pods.go:74] duration metric: took 143.2378ms to wait for pod list to return data ...
	I0229 19:13:31.088680   14472 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:13:31.269500   14472 default_sa.go:45] found service account: "default"
	I0229 19:13:31.269571   14472 default_sa.go:55] duration metric: took 180.8188ms for default service account to be created ...
	I0229 19:13:31.269571   14472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:13:31.483245   14472 system_pods.go:86] 7 kube-system pods found
	I0229 19:13:31.483245   14472 system_pods.go:89] "coredns-5dd5756b68-xt2b4" [95171e04-c522-46e5-a759-27b5869d7d36] Running
	I0229 19:13:31.483245   14472 system_pods.go:89] "etcd-enable-default-cni-652900" [4ddbc764-adad-45c3-80e0-138cc7ef3f8c] Running
	I0229 19:13:31.483245   14472 system_pods.go:89] "kube-apiserver-enable-default-cni-652900" [047f3d07-34a0-424d-ad95-6c6d503db759] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "kube-controller-manager-enable-default-cni-652900" [6153d2a9-409e-4cd8-bc71-864e2d37041c] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "kube-proxy-7fs6d" [32abedad-6c34-4043-9e4e-524d2f955678] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "kube-scheduler-enable-default-cni-652900" [730eb52d-7be7-407f-a452-6d72396d7af2] Running
	I0229 19:13:31.483338   14472 system_pods.go:89] "storage-provisioner" [99f20e05-1657-4023-9d05-6c06d4ba3cf7] Running
	I0229 19:13:31.483338   14472 system_pods.go:126] duration metric: took 213.7647ms to wait for k8s-apps to be running ...
	I0229 19:13:31.483416   14472 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:13:31.497732   14472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:31.524180   14472 system_svc.go:56] duration metric: took 40.8415ms WaitForService to wait for kubelet.
	I0229 19:13:31.524221   14472 kubeadm.go:581] duration metric: took 6.6064669s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:13:31.524286   14472 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:13:31.678661   14472 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 19:13:31.678661   14472 node_conditions.go:123] node cpu capacity is 16
	I0229 19:13:31.678661   14472 node_conditions.go:105] duration metric: took 154.3731ms to run NodePressure ...
	I0229 19:13:31.678661   14472 start.go:228] waiting for startup goroutines ...
	I0229 19:13:31.678661   14472 start.go:233] waiting for cluster config update ...
	I0229 19:13:31.678661   14472 start.go:242] writing updated cluster config ...
	I0229 19:13:31.691140   14472 ssh_runner.go:195] Run: rm -f paused
	I0229 19:13:31.838239   14472 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:13:31.842047   14472 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-652900" cluster and "default" namespace by default
	I0229 19:13:27.471942    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:29.942886    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:32.443024    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:34.937307    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:36.940221    7948 pod_ready.go:102] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:38.948090    7948 pod_ready.go:97] pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.85.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 19:13:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 19:13:27 +0000 UTC,FinishedAt:2024-02-29 19:13:37 +0000 UTC,ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8 Started:0xc002e748c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 19:13:38.948188    7948 pod_ready.go:81] duration metric: took 13.5310527s waiting for pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace to be "Ready" ...
	E0229 19:13:38.948188    7948 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-nxn4c" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 19:13:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.85.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 19:13:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 19:13:27 +0000 UTC,FinishedAt:2024-02-29 19:13:37 +0000 UTC,ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://bca7169eab28926b840afa2f659355d18c4198796c896ac35344898736d3dfc8 Started:0xc002e748c0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 19:13:38.948272    7948 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:40.972198    7948 pod_ready.go:102] pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:38.418046    2612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-652900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (19.633482s)
	I0229 19:13:38.418046    2612 kic.go:203] duration metric: took 19.641515 seconds to extract preloaded images to volume
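	(The extraction step timed above uses a throwaway container as a shim so a host-side tarball can be unpacked straight into a named Docker volume. A sketch of the pattern from the logged command, with the Windows tarball path abbreviated to a placeholder:
	# Unpack an lz4-compressed preload tarball into the volume backing /var.
	# <preload-tarball-path> stands in for the cached .tar.lz4 path shown in the log.
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "<preload-tarball-path>:/preloaded.tar:ro" \
	  -v kubenet-652900:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
	)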
	I0229 19:13:38.435319    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 19:13:38.857095    2612 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:93 SystemTime:2024-02-29 19:13:38.816805162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 19:13:38.867079    2612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 19:13:39.264013    2612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-652900 --name kubenet-652900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-652900 --network kubenet-652900 --ip 192.168.76.2 --volume kubenet-652900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 19:13:40.361162    2612 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-652900 --name kubenet-652900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-652900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-652900 --network kubenet-652900 --ip 192.168.76.2 --volume kubenet-652900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08: (1.0961408s)
	I0229 19:13:40.379193    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Running}}
	I0229 19:13:40.659183    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:13:40.922183    2612 cli_runner.go:164] Run: docker exec kubenet-652900 stat /var/lib/dpkg/alternatives/iptables
	I0229 19:13:41.268126    2612 oci.go:144] the created container "kubenet-652900" has a running status.
	I0229 19:13:41.268126    2612 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa...
	I0229 19:13:42.002876    2612 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 19:13:42.310877    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:13:42.567865    2612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 19:13:42.567865    2612 kic_runner.go:114] Args: [docker exec --privileged kubenet-652900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 19:13:42.877874    2612 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa...
	I0229 19:13:42.978875    7948 pod_ready.go:102] pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace has status "Ready":"False"
	I0229 19:13:43.971525    7948 pod_ready.go:92] pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:43.971525    7948 pod_ready.go:81] duration metric: took 5.0232109s waiting for pod "coredns-5dd5756b68-ppr7c" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.972527    7948 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.983529    7948 pod_ready.go:92] pod "etcd-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:43.983529    7948 pod_ready.go:81] duration metric: took 11.0015ms waiting for pod "etcd-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.983529    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.998802    7948 pod_ready.go:92] pod "kube-apiserver-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:43.998802    7948 pod_ready.go:81] duration metric: took 15.2729ms waiting for pod "kube-apiserver-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:43.998880    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.014735    7948 pod_ready.go:92] pod "kube-controller-manager-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:44.014735    7948 pod_ready.go:81] duration metric: took 15.855ms waiting for pod "kube-controller-manager-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.014735    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-vgpqv" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.029717    7948 pod_ready.go:92] pod "kube-proxy-vgpqv" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:44.029717    7948 pod_ready.go:81] duration metric: took 14.9819ms waiting for pod "kube-proxy-vgpqv" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.029717    7948 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.374959    7948 pod_ready.go:92] pod "kube-scheduler-bridge-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:44.375004    7948 pod_ready.go:81] duration metric: took 345.2847ms waiting for pod "kube-scheduler-bridge-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:44.375004    7948 pod_ready.go:38] duration metric: took 18.9762565s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:44.375093    7948 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:13:44.390677    7948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:13:44.418678    7948 api_server.go:72] duration metric: took 20.9247862s to wait for apiserver process to appear ...
	I0229 19:13:44.418678    7948 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:13:44.418678    7948 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61232/healthz ...
	I0229 19:13:44.432694    7948 api_server.go:279] https://127.0.0.1:61232/healthz returned 200:
	ok
	I0229 19:13:44.437672    7948 api_server.go:141] control plane version: v1.28.4
	I0229 19:13:44.437672    7948 api_server.go:131] duration metric: took 18.994ms to wait for apiserver health ...
	I0229 19:13:44.437672    7948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:13:44.584426    7948 system_pods.go:59] 7 kube-system pods found
	I0229 19:13:44.584426    7948 system_pods.go:61] "coredns-5dd5756b68-ppr7c" [b6dc096a-bd1c-495d-814c-97cf9df8c055] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "etcd-bridge-652900" [c7edf61c-e7dd-483b-b6ec-2c5cb356a38b] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-apiserver-bridge-652900" [1c29d36d-e4e6-4de4-9d16-2883956e58b9] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-controller-manager-bridge-652900" [1d57c399-a9bd-4838-811b-dca8932d15af] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-proxy-vgpqv" [7654b9cc-c8d7-4041-8304-9f3e44ad85d4] Running
	I0229 19:13:44.584426    7948 system_pods.go:61] "kube-scheduler-bridge-652900" [b9a48085-e455-4c39-a6bb-99c5232898bb] Running
	I0229 19:13:44.584964    7948 system_pods.go:61] "storage-provisioner" [59049e0e-2666-4064-a7e5-7c286bd68480] Running
	I0229 19:13:44.584964    7948 system_pods.go:74] duration metric: took 147.2907ms to wait for pod list to return data ...
	I0229 19:13:44.584964    7948 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:13:44.766220    7948 default_sa.go:45] found service account: "default"
	I0229 19:13:44.766508    7948 default_sa.go:55] duration metric: took 181.5427ms for default service account to be created ...
	I0229 19:13:44.766605    7948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:13:44.976238    7948 system_pods.go:86] 7 kube-system pods found
	I0229 19:13:44.976238    7948 system_pods.go:89] "coredns-5dd5756b68-ppr7c" [b6dc096a-bd1c-495d-814c-97cf9df8c055] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "etcd-bridge-652900" [c7edf61c-e7dd-483b-b6ec-2c5cb356a38b] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-apiserver-bridge-652900" [1c29d36d-e4e6-4de4-9d16-2883956e58b9] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-controller-manager-bridge-652900" [1d57c399-a9bd-4838-811b-dca8932d15af] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-proxy-vgpqv" [7654b9cc-c8d7-4041-8304-9f3e44ad85d4] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "kube-scheduler-bridge-652900" [b9a48085-e455-4c39-a6bb-99c5232898bb] Running
	I0229 19:13:44.976238    7948 system_pods.go:89] "storage-provisioner" [59049e0e-2666-4064-a7e5-7c286bd68480] Running
	I0229 19:13:44.976238    7948 system_pods.go:126] duration metric: took 209.6314ms to wait for k8s-apps to be running ...
	I0229 19:13:44.976238    7948 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:13:44.988254    7948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:45.015171    7948 system_svc.go:56] duration metric: took 38.9327ms WaitForService to wait for kubelet.
	I0229 19:13:45.015171    7948 kubeadm.go:581] duration metric: took 21.5212748s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:13:45.015171    7948 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:13:45.175730    7948 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 19:13:45.175730    7948 node_conditions.go:123] node cpu capacity is 16
	I0229 19:13:45.175730    7948 node_conditions.go:105] duration metric: took 160.5579ms to run NodePressure ...
	I0229 19:13:45.175835    7948 start.go:228] waiting for startup goroutines ...
	I0229 19:13:45.175835    7948 start.go:233] waiting for cluster config update ...
	I0229 19:13:45.175835    7948 start.go:242] writing updated cluster config ...
	I0229 19:13:45.191186    7948 ssh_runner.go:195] Run: rm -f paused
	I0229 19:13:45.359309    7948 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:13:45.362773    7948 out.go:177] * Done! kubectl is now configured to use "bridge-652900" cluster and "default" namespace by default
	I0229 19:13:45.799031    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:13:45.998215    2612 machine.go:88] provisioning docker machine ...
	I0229 19:13:45.998385    2612 ubuntu.go:169] provisioning hostname "kubenet-652900"
	I0229 19:13:46.012372    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:46.207363    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:46.219385    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:46.219385    2612 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-652900 && echo "kubenet-652900" | sudo tee /etc/hostname
	I0229 19:13:46.421391    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-652900
	
	I0229 19:13:46.436584    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:46.600877    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:46.600877    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:46.600877    2612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-652900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-652900/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-652900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:13:46.773469    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:13:46.773557    2612 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0229 19:13:46.773627    2612 ubuntu.go:177] setting up certificates
	I0229 19:13:46.773689    2612 provision.go:83] configureAuth start
	I0229 19:13:46.786260    2612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-652900
	I0229 19:13:46.982076    2612 provision.go:138] copyHostCerts
	I0229 19:13:46.982076    2612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:13:46.982076    2612 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0229 19:13:46.982076    2612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0229 19:13:46.984125    2612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:13:46.984125    2612 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0229 19:13:46.985104    2612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:13:46.986076    2612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:13:46.986076    2612 exec_runner.go:203] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0229 19:13:46.987163    2612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:13:46.988093    2612 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-652900 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-652900]
	I0229 19:13:47.239980    2612 provision.go:172] copyRemoteCerts
	I0229 19:13:47.250989    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:13:47.260006    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:47.471987    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:47.601640    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 19:13:47.652249    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 19:13:47.694499    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0229 19:13:47.741893    2612 provision.go:86] duration metric: configureAuth took 968.128ms
	I0229 19:13:47.741952    2612 ubuntu.go:193] setting minikube options for container-runtime
	I0229 19:13:47.742465    2612 config.go:182] Loaded profile config "kubenet-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:47.752547    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:47.964361    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:47.964913    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:47.964913    2612 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:13:48.147137    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 19:13:48.147207    2612 ubuntu.go:71] root file system type: overlay
	I0229 19:13:48.147598    2612 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:13:48.165127    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:48.364211    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:48.364211    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:48.364211    2612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:13:48.563981    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
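Note: the unit file echoed above uses the standard systemd "reset then set" pattern that its own comment block describes: the bare `ExecStart=` clears any inherited command so the `ExecStart=/usr/bin/dockerd ...` that follows is the only one. A minimal sketch of the same pattern as a conventional drop-in rather than a full unit replacement (paths and flags illustrative, not taken from this run):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker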
	I0229 19:13:48.576329    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:48.768285    2612 main.go:141] libmachine: Using SSH client type: native
	I0229 19:13:48.768946    2612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x9c9d80] 0x9cc960 <nil>  [] 0s} 127.0.0.1 61320 <nil> <nil>}
	I0229 19:13:48.769011    2612 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:13:50.333539    2612 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 19:13:48.545414330 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
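Note: the `sudo diff -u ... || { sudo mv ...; daemon-reload; enable; restart; }` command above is an update-only-if-changed idiom: diff exits non-zero when the two files differ, so the replacement and service restart run only in that case, which is why the unified diff is printed as a side effect. The same idiom spelled out (paths illustrative):

    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new >/dev/null; then
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    fi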
	I0229 19:13:50.333539    2612 machine.go:91] provisioned docker machine in 4.3352884s
	I0229 19:13:50.333539    2612 client.go:171] LocalClient.Create took 35.6123745s
	I0229 19:13:50.333539    2612 start.go:167] duration metric: libmachine.API.Create for "kubenet-652900" took 35.6123745s
	I0229 19:13:50.333539    2612 start.go:300] post-start starting for "kubenet-652900" (driver="docker")
	I0229 19:13:50.333539    2612 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:13:50.350517    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:13:50.367538    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:50.599524    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:50.772548    2612 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:13:50.784535    2612 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 19:13:50.784535    2612 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 19:13:50.784535    2612 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 19:13:50.784535    2612 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 19:13:50.784535    2612 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0229 19:13:50.785536    2612 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0229 19:13:50.788529    2612 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem -> 56602.pem in /etc/ssl/certs
	I0229 19:13:50.810525    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:13:50.836522    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /etc/ssl/certs/56602.pem (1708 bytes)
	I0229 19:13:50.886705    2612 start.go:303] post-start completed in 552.1719ms
	I0229 19:13:50.907695    2612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-652900
	I0229 19:13:51.140702    2612 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\config.json ...
	I0229 19:13:51.157686    2612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 19:13:51.170696    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:51.397696    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:51.556695    2612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 19:13:51.568694    2612 start.go:128] duration metric: createHost completed in 36.8579404s
	I0229 19:13:51.568694    2612 start.go:83] releasing machines lock for "kubenet-652900", held for 36.8579404s
	I0229 19:13:51.577684    2612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-652900
	I0229 19:13:51.797695    2612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:13:51.812706    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:51.812706    2612 ssh_runner.go:195] Run: cat /version.json
	I0229 19:13:51.832031    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:52.044041    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:52.059032    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:13:52.363008    2612 ssh_runner.go:195] Run: systemctl --version
	I0229 19:13:52.388009    2612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 19:13:52.422008    2612 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0229 19:13:52.445051    2612 start.go:419] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
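Note: the backslash path `\etc\cni\net.d` in the failing command above is a Windows path-join artifact from the host-side client; inside the Linux guest the directory is /etc/cni/net.d, which is why find reports it as missing and the loopback patch step is skipped with a warning. The intended Linux-side invocation would start like this (illustrative):

    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled'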
	I0229 19:13:52.460012    2612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:13:52.548022    2612 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:13:52.548022    2612 start.go:475] detecting cgroup driver to use...
	I0229 19:13:52.548022    2612 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 19:13:52.549019    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:13:52.607010    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 19:13:52.652014    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:13:52.672010    2612 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:13:52.686244    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:13:52.746458    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:13:52.782063    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:13:52.833076    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:13:52.867055    2612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:13:52.906101    2612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:13:52.942071    2612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:13:52.967059    2612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:13:53.011079    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:53.210476    2612 ssh_runner.go:195] Run: sudo systemctl restart containerd
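Note: the sed runs above rewrite /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the detected "cgroupfs" driver, the legacy runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A quick way to confirm the result after the restart (an illustrative check, not part of this run):

    grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected, per the sed commands above:
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"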
	I0229 19:13:53.382013    2612 start.go:475] detecting cgroup driver to use...
	I0229 19:13:53.382113    2612 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 19:13:53.394816    2612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:13:53.423148    2612 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 19:13:53.435852    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:13:53.462471    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:13:53.514251    2612 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:13:53.551030    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:13:53.568013    2612 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (193 bytes)
	I0229 19:13:53.633924    2612 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:13:53.791337    2612 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:13:53.943954    2612 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:13:53.944214    2612 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:13:53.987278    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:54.147094    2612 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:13:55.464411    2612 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.3173062s)
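Note: the 130-byte /etc/docker/daemon.json copied above is what switches dockerd to the "cgroupfs" driver; its exact contents are not shown in the log. A hypothetical minimal equivalent using the standard dockerd option (the real payload may set more keys):

    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl restart docker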
	I0229 19:13:55.478569    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 19:13:55.518584    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:13:55.555569    2612 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 19:13:55.740612    2612 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 19:13:55.934236    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:56.147139    2612 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 19:13:56.190765    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:13:56.230733    2612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:56.425544    2612 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 19:13:56.592799    2612 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 19:13:56.607787    2612 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 19:13:56.618803    2612 start.go:543] Will wait 60s for crictl version
	I0229 19:13:56.632786    2612 ssh_runner.go:195] Run: which crictl
	I0229 19:13:56.668647    2612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:13:56.778647    2612 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.3
	RuntimeApiVersion:  v1
	I0229 19:13:56.790619    2612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:13:56.865585    2612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:13:56.926712    2612 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.3 ...
	I0229 19:13:56.937717    2612 cli_runner.go:164] Run: docker exec -t kubenet-652900 dig +short host.docker.internal
	I0229 19:13:57.246510    2612 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 19:13:57.259573    2612 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 19:13:57.274453    2612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:13:57.305447    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:13:57.476401    2612 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:57.486559    2612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:13:57.537158    2612 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 19:13:57.537260    2612 docker.go:615] Images already preloaded, skipping extraction
	I0229 19:13:57.545703    2612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:13:57.594686    2612 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 19:13:57.594777    2612 cache_images.go:84] Images are preloaded, skipping loading
	I0229 19:13:57.607995    2612 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:13:57.735844    2612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0229 19:13:57.735844    2612 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:13:57.735844    2612 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-652900 NodeName:kubenet-652900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:13:57.736564    2612 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-652900"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
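Note: the generated kubeadm config above is a single multi-document YAML file: InitConfiguration (node-local settings), ClusterConfiguration (cluster-wide settings), KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. It is consumed in one shot later in this log; the invocation reduces to (path as used in this run, minus the preflight-error flags):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml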
	I0229 19:13:57.736756    2612 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubenet-652900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
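Note: the kubelet drop-in above repeats the ExecStart reset-then-set pattern and wires the kubelet to the cri-dockerd shim via --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock, the same socket written into /etc/crictl.yaml earlier. An illustrative cross-check that the two agree (not part of this run):

    grep -o 'container-runtime-endpoint=[^ ]*' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    grep runtime-endpoint /etc/crictl.yaml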
	I0229 19:13:57.750093    2612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 19:13:57.767026    2612 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:13:57.779031    2612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:13:57.800011    2612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (400 bytes)
	I0229 19:13:57.832777    2612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:13:57.867951    2612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0229 19:13:57.920606    2612 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0229 19:13:57.934968    2612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:13:57.963548    2612 certs.go:56] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900 for IP: 192.168.76.2
	I0229 19:13:57.963548    2612 certs.go:190] acquiring lock for shared ca certs: {Name:mk3bd6e475aadf28590677101b36f61c132d4290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:57.963548    2612 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0229 19:13:57.964547    2612 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:13:57.965544    2612 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.key
	I0229 19:13:57.965544    2612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt with IP's: []
	I0229 19:13:58.523802    2612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt ...
	I0229 19:13:58.523802    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.crt: {Name:mkee2c5fdf95fdcd5db1cd86782d60f0b9c24b77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.526067    2612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.key ...
	I0229 19:13:58.526152    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\client.key: {Name:mkec6556a9f6406d9a4ea13eed590466865d0b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.527431    2612 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25
	I0229 19:13:58.527477    2612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:13:58.901885    2612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25 ...
	I0229 19:13:58.901885    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25: {Name:mk375451f66bc7990b7c902a22148a16103489d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.903909    2612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25 ...
	I0229 19:13:58.903909    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25: {Name:mke9d8ca8a25586f609361d99a26cb8aa6f8a1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:58.904240    2612 certs.go:337] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt.31bdca25 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt
	I0229 19:13:58.916037    2612 certs.go:341] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key.31bdca25 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key
	I0229 19:13:58.917036    2612 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key
	I0229 19:13:58.917036    2612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt with IP's: []
	I0229 19:13:59.049913    2612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt ...
	I0229 19:13:59.049913    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt: {Name:mk3cb52698f22961884ce5c5423d4c57ac536599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:59.050560    2612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key ...
	I0229 19:13:59.051563    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key: {Name:mk8305f3a6a959f2fcfbf796deb5bb3bd454352f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:59.066147    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem (1338 bytes)
	W0229 19:13:59.066501    2612 certs.go:433] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660_empty.pem, impossibly tiny 0 bytes
	I0229 19:13:59.066501    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0229 19:13:59.066847    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0229 19:13:59.067223    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:13:59.067223    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:13:59.067738    2612 certs.go:437] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem (1708 bytes)
	I0229 19:13:59.069733    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:13:59.124686    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 19:13:59.167425    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:13:59.206764    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-652900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 19:13:59.249774    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:13:59.290644    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 19:13:59.332784    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:13:59.383817    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:13:59.429790    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:13:59.472072    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\5660.pem --> /usr/share/ca-certificates/5660.pem (1338 bytes)
	I0229 19:13:59.514749    2612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\56602.pem --> /usr/share/ca-certificates/56602.pem (1708 bytes)
	I0229 19:13:59.562316    2612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:13:59.607237    2612 ssh_runner.go:195] Run: openssl version
	I0229 19:13:59.633559    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56602.pem && ln -fs /usr/share/ca-certificates/56602.pem /etc/ssl/certs/56602.pem"
	I0229 19:13:59.662657    2612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56602.pem
	I0229 19:13:59.674717    2612 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:50 /usr/share/ca-certificates/56602.pem
	I0229 19:13:59.685887    2612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56602.pem
	I0229 19:13:59.714711    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56602.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:13:59.747421    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:13:59.775336    2612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:59.786331    2612 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:59.797561    2612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:59.826615    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:13:59.861926    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5660.pem && ln -fs /usr/share/ca-certificates/5660.pem /etc/ssl/certs/5660.pem"
	I0229 19:13:59.895051    2612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5660.pem
	I0229 19:13:59.906595    2612 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:50 /usr/share/ca-certificates/5660.pem
	I0229 19:13:59.918340    2612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5660.pem
	I0229 19:13:59.947694    2612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5660.pem /etc/ssl/certs/51391683.0"
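Note: the hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's CA-directory convention: the link name is the certificate's subject hash plus a ".0" suffix, which is what the preceding `openssl x509 -hash -noout` runs compute. Derived by hand it looks like this (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"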
	I0229 19:13:59.980095    2612 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:13:59.992598    2612 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:13:59.992598    2612 kubeadm.go:404] StartCluster: {Name:kubenet-652900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-652900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:14:00.004318    2612 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:14:00.064601    2612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:14:00.095887    2612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:14:00.116118    2612 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 19:14:00.126192    2612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:14:00.145733    2612 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:14:00.145733    2612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 19:14:00.327835    2612 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0229 19:14:00.490607    2612 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:14:19.354361    2612 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 19:14:19.354361    2612 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:14:19.354361    2612 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:14:19.354361    2612 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:14:19.355350    2612 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:14:19.355350    2612 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:14:19.359424    2612 out.go:204]   - Generating certificates and keys ...
	I0229 19:14:19.359424    2612 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:14:19.359424    2612 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 19:14:19.360356    2612 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 19:14:19.361362    2612 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 19:14:19.361362    2612 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubenet-652900 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0229 19:14:19.361362    2612 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubenet-652900 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 19:14:19.362360    2612 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:14:19.362360    2612 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:14:19.362360    2612 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:14:19.363362    2612 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:14:19.363362    2612 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:14:19.363362    2612 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:14:19.363362    2612 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:14:19.366352    2612 out.go:204]   - Booting up control plane ...
	I0229 19:14:19.366352    2612 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:14:19.366352    2612 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:14:19.367362    2612 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 19:14:19.368365    2612 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:14:19.368365    2612 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.522477 seconds
	I0229 19:14:19.368365    2612 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 19:14:19.369350    2612 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 19:14:19.369350    2612 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 19:14:19.369350    2612 kubeadm.go:322] [mark-control-plane] Marking the node kubenet-652900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 19:14:19.369350    2612 kubeadm.go:322] [bootstrap-token] Using token: dddyg5.8xck91887j66rhnf
	I0229 19:14:19.373356    2612 out.go:204]   - Configuring RBAC rules ...
	I0229 19:14:19.373356    2612 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 19:14:19.373356    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 19:14:19.374360    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 19:14:19.375378    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 19:14:19.375378    2612 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 19:14:19.375378    2612 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 19:14:19.376357    2612 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 19:14:19.376357    2612 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 19:14:19.376357    2612 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 19:14:19.376357    2612 kubeadm.go:322] 
	I0229 19:14:19.376357    2612 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 19:14:19.376357    2612 kubeadm.go:322] 
	I0229 19:14:19.376357    2612 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 19:14:19.376357    2612 kubeadm.go:322] 
	I0229 19:14:19.377351    2612 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 19:14:19.377351    2612 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 19:14:19.377351    2612 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 19:14:19.377351    2612 kubeadm.go:322] 
	I0229 19:14:19.377351    2612 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 19:14:19.377351    2612 kubeadm.go:322] 
	I0229 19:14:19.377351    2612 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 19:14:19.378373    2612 kubeadm.go:322] 
	I0229 19:14:19.378373    2612 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 19:14:19.378373    2612 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 19:14:19.379356    2612 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 19:14:19.379356    2612 kubeadm.go:322] 
	I0229 19:14:19.379356    2612 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 19:14:19.379356    2612 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 19:14:19.379356    2612 kubeadm.go:322] 
	I0229 19:14:19.379356    2612 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dddyg5.8xck91887j66rhnf \
	I0229 19:14:19.379356    2612 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80eb25c1c6cbd8ac057f190e22b9147f2ead62b31e10db2e8c638577512ad3fe \
	I0229 19:14:19.379356    2612 kubeadm.go:322] 	--control-plane 
	I0229 19:14:19.380355    2612 kubeadm.go:322] 
	I0229 19:14:19.380355    2612 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 19:14:19.380355    2612 kubeadm.go:322] 
	I0229 19:14:19.380355    2612 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dddyg5.8xck91887j66rhnf \
	I0229 19:14:19.380355    2612 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:80eb25c1c6cbd8ac057f190e22b9147f2ead62b31e10db2e8c638577512ad3fe 
	I0229 19:14:19.380355    2612 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0229 19:14:19.380355    2612 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:14:19.402354    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=kubenet-652900 minikube.k8s.io/updated_at=2024_02_29T19_14_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:19.402354    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:19.403357    2612 ops.go:34] apiserver oom_adj: -16
	I0229 19:14:20.245287    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:20.753175    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:21.244919    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:21.751126    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:22.248621    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:22.749928    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:23.256705    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:23.744510    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:24.256825    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:24.755716    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:25.265790    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:25.755576    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:26.257068    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:26.756699    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:27.257545    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:27.749199    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:28.251577    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:28.758878    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:29.255489    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:29.758118    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:30.257221    2612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:14:31.117825    2612 kubeadm.go:1088] duration metric: took 11.7373731s to wait for elevateKubeSystemPrivileges.
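Note: the repeated `kubectl get sa default` runs above are a readiness poll at roughly 500 ms intervals: kubeadm init returns before the "default" ServiceAccount exists in the new cluster, so minikube retries until the get succeeds. The pattern distilled (a sketch of the idiom, not minikube's Go implementation):

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done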
	I0229 19:14:31.117825    2612 kubeadm.go:406] StartCluster complete in 31.1249709s
	I0229 19:14:31.117825    2612 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:14:31.117825    2612 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 19:14:31.120843    2612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:14:31.122834    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:14:31.122834    2612 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:14:31.122834    2612 addons.go:69] Setting storage-provisioner=true in profile "kubenet-652900"
	I0229 19:14:31.122834    2612 addons.go:234] Setting addon storage-provisioner=true in "kubenet-652900"
	I0229 19:14:31.122834    2612 addons.go:69] Setting default-storageclass=true in profile "kubenet-652900"
	I0229 19:14:31.122834    2612 host.go:66] Checking if "kubenet-652900" exists ...
	I0229 19:14:31.122834    2612 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-652900"
	I0229 19:14:31.122834    2612 config.go:182] Loaded profile config "kubenet-652900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:14:31.150821    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:14:31.151818    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	W0229 19:14:31.299834    2612 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "kubenet-652900" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0229 19:14:31.300834    2612 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
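Note: "the object has been modified; please apply your changes to the latest version and try again" is the Kubernetes optimistic-concurrency conflict: the coredns deployment's resourceVersion changed between minikube's read and its write. minikube classifies it as non-retryable here and continues; the generic remedy is to re-read and retry (illustrative, not what this run does):

    for i in 1 2 3; do
      kubectl -n kube-system scale deployment coredns --replicas=1 && break
      sleep 1
    done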
	I0229 19:14:31.300834    2612 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:14:31.305839    2612 out.go:177] * Verifying Kubernetes components...
	I0229 19:14:31.330826    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:14:31.364840    2612 addons.go:234] Setting addon default-storageclass=true in "kubenet-652900"
	I0229 19:14:31.364840    2612 host.go:66] Checking if "kubenet-652900" exists ...
	I0229 19:14:31.378831    2612 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:14:31.381842    2612 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:14:31.381842    2612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:14:31.394819    2612 cli_runner.go:164] Run: docker container inspect kubenet-652900 --format={{.State.Status}}
	I0229 19:14:31.399830    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:14:31.589831    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:14:31.604828    2612 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:14:31.604828    2612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:14:31.619827    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:14:31.798848    2612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61320 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-652900\id_rsa Username:docker}
	I0229 19:14:31.891515    2612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 19:14:31.909695    2612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-652900
	I0229 19:14:32.008678    2612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:14:32.145670    2612 node_ready.go:35] waiting up to 15m0s for node "kubenet-652900" to be "Ready" ...
	I0229 19:14:32.189555    2612 node_ready.go:49] node "kubenet-652900" has status "Ready":"True"
	I0229 19:14:32.189555    2612 node_ready.go:38] duration metric: took 43.8841ms waiting for node "kubenet-652900" to be "Ready" ...
	I0229 19:14:32.190160    2612 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:14:32.213455    2612 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace to be "Ready" ...
	I0229 19:14:32.217343    2612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:14:34.295963    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:36.317580    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:36.490883    2612 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.5993305s)
	I0229 19:14:36.490883    2612 start.go:929] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
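	The sed pipeline completed above rewrites the CoreDNS Corefile inside the coredns ConfigMap. Reconstructed from the two sed expressions in the command itself (surrounding directives elided), the relevant fragment of the rewritten Corefile would look roughly like this:

	    log
	    errors
	    ...
	    hosts {
	       192.168.65.254 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf

	The injected hosts block is what makes host.minikube.internal resolve to the host machine (192.168.65.254 in this run); fallthrough hands every other name on to the usual forwarders.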
	I0229 19:14:36.880572    2612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.6631903s)
	I0229 19:14:36.880572    2612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.8718538s)
	I0229 19:14:36.909572    2612 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:14:36.912553    2612 addons.go:505] enable addons completed in 5.7896712s: enabled=[storage-provisioner default-storageclass]
	I0229 19:14:38.754895    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:41.233671    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:43.236985    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:45.249590    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:47.734437    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:49.738864    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:51.740358    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:53.741414    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:56.310956    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:14:58.732892    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:00.734690    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:02.740366    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:05.234073    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:07.242407    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:09.246915    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:11.791908    2612 pod_ready.go:102] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:12.235948    2612 pod_ready.go:92] pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:12.235948    2612 pod_ready.go:81] duration metric: took 40.0221652s waiting for pod "coredns-5dd5756b68-4bbhp" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:12.235948    2612 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-n9887" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:14.255629    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:16.276814    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:18.762378    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:20.767367    2612 pod_ready.go:102] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"False"
	I0229 19:15:22.267101    2612 pod_ready.go:92] pod "coredns-5dd5756b68-n9887" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.267101    2612 pod_ready.go:81] duration metric: took 10.0310706s waiting for pod "coredns-5dd5756b68-n9887" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.267101    2612 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.280015    2612 pod_ready.go:92] pod "etcd-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.280015    2612 pod_ready.go:81] duration metric: took 12.9144ms waiting for pod "etcd-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.280015    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.290624    2612 pod_ready.go:92] pod "kube-apiserver-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.290624    2612 pod_ready.go:81] duration metric: took 10.6086ms waiting for pod "kube-apiserver-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.290624    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.301842    2612 pod_ready.go:92] pod "kube-controller-manager-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.301842    2612 pod_ready.go:81] duration metric: took 11.2176ms waiting for pod "kube-controller-manager-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.301842    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-f78bb" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.316255    2612 pod_ready.go:92] pod "kube-proxy-f78bb" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.316255    2612 pod_ready.go:81] duration metric: took 14.4132ms waiting for pod "kube-proxy-f78bb" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.316255    2612 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.666846    2612 pod_ready.go:92] pod "kube-scheduler-kubenet-652900" in "kube-system" namespace has status "Ready":"True"
	I0229 19:15:22.666846    2612 pod_ready.go:81] duration metric: took 350.5879ms waiting for pod "kube-scheduler-kubenet-652900" in "kube-system" namespace to be "Ready" ...
	I0229 19:15:22.666973    2612 pod_ready.go:38] duration metric: took 50.4763995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:15:22.666973    2612 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:15:22.680455    2612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:15:22.707548    2612 api_server.go:72] duration metric: took 51.4062923s to wait for apiserver process to appear ...
	I0229 19:15:22.708559    2612 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:15:22.708559    2612 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61319/healthz ...
	I0229 19:15:22.724191    2612 api_server.go:279] https://127.0.0.1:61319/healthz returned 200:
	ok
	I0229 19:15:22.729601    2612 api_server.go:141] control plane version: v1.28.4
	I0229 19:15:22.729601    2612 api_server.go:131] duration metric: took 21.0421ms to wait for apiserver health ...
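	The healthz probe logged above can be reproduced by hand against the same forwarded port. A minimal sketch (port 61319 is specific to this run, and -k skips verification of minikube's self-signed certificate):

	    curl -k https://127.0.0.1:61319/healthz
	    # a healthy apiserver answers with HTTP 200 and the body:
	    # ok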
	I0229 19:15:22.729601    2612 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:15:22.878982    2612 system_pods.go:59] 8 kube-system pods found
	I0229 19:15:22.879228    2612 system_pods.go:61] "coredns-5dd5756b68-4bbhp" [d4f7e92e-7989-4dc7-926b-ffe4535d3777] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "coredns-5dd5756b68-n9887" [7fd56343-3f9f-4561-8ec8-70b32fb4f72b] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "etcd-kubenet-652900" [e60858b1-e4fe-4ec9-b9e6-0621f60cd9d9] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-apiserver-kubenet-652900" [3d940450-6611-48d3-b390-3df5a9a0b5eb] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-controller-manager-kubenet-652900" [9dc2da02-9ef9-40ae-9bfb-bc21477a0f51] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-proxy-f78bb" [20d02d18-4fa1-4852-b217-fb7193effb23] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "kube-scheduler-kubenet-652900" [7fb4828b-3376-4b94-bde0-1cd982dbefb1] Running
	I0229 19:15:22.879228    2612 system_pods.go:61] "storage-provisioner" [d23b50f2-9878-47dc-8185-56087fe44a01] Running
	I0229 19:15:22.879228    2612 system_pods.go:74] duration metric: took 149.6258ms to wait for pod list to return data ...
	I0229 19:15:22.879287    2612 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:15:23.060507    2612 default_sa.go:45] found service account: "default"
	I0229 19:15:23.060620    2612 default_sa.go:55] duration metric: took 181.3317ms for default service account to be created ...
	I0229 19:15:23.060620    2612 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:15:23.272885    2612 system_pods.go:86] 8 kube-system pods found
	I0229 19:15:23.272885    2612 system_pods.go:89] "coredns-5dd5756b68-4bbhp" [d4f7e92e-7989-4dc7-926b-ffe4535d3777] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "coredns-5dd5756b68-n9887" [7fd56343-3f9f-4561-8ec8-70b32fb4f72b] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "etcd-kubenet-652900" [e60858b1-e4fe-4ec9-b9e6-0621f60cd9d9] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-apiserver-kubenet-652900" [3d940450-6611-48d3-b390-3df5a9a0b5eb] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-controller-manager-kubenet-652900" [9dc2da02-9ef9-40ae-9bfb-bc21477a0f51] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-proxy-f78bb" [20d02d18-4fa1-4852-b217-fb7193effb23] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "kube-scheduler-kubenet-652900" [7fb4828b-3376-4b94-bde0-1cd982dbefb1] Running
	I0229 19:15:23.272885    2612 system_pods.go:89] "storage-provisioner" [d23b50f2-9878-47dc-8185-56087fe44a01] Running
	I0229 19:15:23.272885    2612 system_pods.go:126] duration metric: took 212.2631ms to wait for k8s-apps to be running ...
	I0229 19:15:23.272885    2612 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:15:23.285498    2612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:15:23.308782    2612 system_svc.go:56] duration metric: took 35.8969ms WaitForService to wait for kubelet.
	I0229 19:15:23.308782    2612 kubeadm.go:581] duration metric: took 52.007522s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:15:23.308782    2612 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:15:23.471411    2612 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0229 19:15:23.471411    2612 node_conditions.go:123] node cpu capacity is 16
	I0229 19:15:23.471411    2612 node_conditions.go:105] duration metric: took 162.6278ms to run NodePressure ...
	I0229 19:15:23.471526    2612 start.go:228] waiting for startup goroutines ...
	I0229 19:15:23.471526    2612 start.go:233] waiting for cluster config update ...
	I0229 19:15:23.471526    2612 start.go:242] writing updated cluster config ...
	I0229 19:15:23.483867    2612 ssh_runner.go:195] Run: rm -f paused
	I0229 19:15:23.630356    2612 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:15:23.634795    2612 out.go:177] * Done! kubectl is now configured to use "kubenet-652900" cluster and "default" namespace by default
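	The start.go:601 line above reports a client/cluster skew of one minor version (kubectl 1.29.2 against Kubernetes 1.28.4), which is within kubectl's supported skew of one minor version in either direction. A quick way to re-check the pair against this profile (a sketch; output abbreviated):

	    kubectl version
	    # Client Version: v1.29.2
	    # Server Version: v1.28.4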
	I0229 19:16:05.094764    3012 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:16:05.095215    3012 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:16:05.099756    3012 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:16:05.100306    3012 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:16:05.100596    3012 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:16:05.100925    3012 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:16:05.100925    3012 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:16:05.101471    3012 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:16:05.101798    3012 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:16:05.101940    3012 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:16:05.102008    3012 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:16:05.107021    3012 out.go:204]   - Generating certificates and keys ...
	I0229 19:16:05.107311    3012 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:16:05.107636    3012 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:16:05.107851    3012 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:16:05.108092    3012 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:16:05.108199    3012 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:16:05.108852    3012 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:16:05.109114    3012 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:16:05.109396    3012 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:16:05.109536    3012 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:16:05.109963    3012 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:16:05.110112    3012 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:16:05.110409    3012 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:16:05.110687    3012 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:16:05.110835    3012 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:16:05.111381    3012 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:16:05.113856    3012 out.go:204]   - Booting up control plane ...
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:16:05.114066    3012 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:16:05.115027    3012 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:16:05.116054    3012 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:16:05.116054    3012 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:16:05.116054    3012 kubeadm.go:322] 
	I0229 19:16:05.116054    3012 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:16:05.116054    3012 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:16:05.116054    3012 kubeadm.go:322] 
	I0229 19:16:05.116751    3012 kubeadm.go:322] This error is likely caused by:
	I0229 19:16:05.116871    3012 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:16:05.116979    3012 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:16:05.116979    3012 kubeadm.go:322] 
	I0229 19:16:05.117821    3012 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:16:05.118155    3012 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:16:05.118181    3012 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:16:05.118181    3012 kubeadm.go:322] 
	I0229 19:16:05.118181    3012 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:16:05.118181    3012 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:16:05.118968    3012 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:16:05.119090    3012 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:16:05.119329    3012 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:16:05.119561    3012 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:16:05.119651    3012 kubeadm.go:406] StartCluster complete in 12m31.0799276s
	I0229 19:16:05.129772    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:16:05.171665    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.171665    3012 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:16:05.179666    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:16:05.229862    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.229862    3012 logs.go:278] No container was found matching "etcd"
	I0229 19:16:05.249214    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:16:05.307193    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.307193    3012 logs.go:278] No container was found matching "coredns"
	I0229 19:16:05.316151    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:16:05.362653    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.362653    3012 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:16:05.370651    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:16:05.435306    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.435363    3012 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:16:05.446405    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:16:05.497155    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.497695    3012 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:16:05.510707    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:16:05.553447    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.553447    3012 logs.go:278] No container was found matching "kindnet"
	I0229 19:16:05.563373    3012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 19:16:05.608587    3012 logs.go:276] 0 containers: []
	W0229 19:16:05.608587    3012 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 19:16:05.608587    3012 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:16:05.608587    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:16:05.762137    3012 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:16:05.762137    3012 logs.go:123] Gathering logs for Docker ...
	I0229 19:16:05.762137    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:16:05.794822    3012 logs.go:123] Gathering logs for container status ...
	I0229 19:16:05.794822    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 19:16:05.881247    3012 logs.go:123] Gathering logs for kubelet ...
	I0229 19:16:05.881247    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 19:16:05.936905    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:16:05.943928    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:16:05.955775    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	W0229 19:16:05.959784    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:54 old-k8s-version-718400 kubelet[11316]: E0229 19:15:54.343799   11316 pod_workers.go:191] Error syncing pod b3d303074fe0ca1d42a8bd9ed248df09 ("kube-scheduler-old-k8s-version-718400_kube-system(b3d303074fe0ca1d42a8bd9ed248df09)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.16.0\" is not set"
	W0229 19:16:05.965770    3012 logs.go:138] Found kubelet problem: Feb 29 19:15:56 old-k8s-version-718400 kubelet[11316]: E0229 19:15:56.333115   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	W0229 19:16:05.979274    3012 logs.go:138] Found kubelet problem: Feb 29 19:16:02 old-k8s-version-718400 kubelet[11316]: E0229 19:16:02.344608   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	W0229 19:16:05.982275    3012 logs.go:138] Found kubelet problem: Feb 29 19:16:03 old-k8s-version-718400 kubelet[11316]: E0229 19:16:03.341406   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:16:05.988288    3012 logs.go:123] Gathering logs for dmesg ...
	I0229 19:16:05.988288    3012 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0229 19:16:06.015433    3012 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:16:06.015433    3012 out.go:239] * 
	W0229 19:16:06.015433    3012 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:16:06.015433    3012 out.go:239] * 
	W0229 19:16:06.017885    3012 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:16:06.022944    3012 out.go:177] X Problems detected in kubelet:
	I0229 19:16:06.028715    3012 out.go:177]   Feb 29 19:15:44 old-k8s-version-718400 kubelet[11316]: E0229 19:15:44.334761   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	I0229 19:16:06.033657    3012 out.go:177]   Feb 29 19:15:47 old-k8s-version-718400 kubelet[11316]: E0229 19:15:47.331645   11316 pod_workers.go:191] Error syncing pod 93c07667c1ed2e5ef7b29333781a45af ("etcd-old-k8s-version-718400_kube-system(93c07667c1ed2e5ef7b29333781a45af)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.3.15-0\": Id or size of image \"k8s.gcr.io/etcd:3.3.15-0\" is not set"
	I0229 19:16:06.038773    3012 out.go:177]   Feb 29 19:15:52 old-k8s-version-718400 kubelet[11316]: E0229 19:15:52.364501   11316 pod_workers.go:191] Error syncing pod 1b9be8fe0c63c9db9fa83c840ce11e96 ("kube-apiserver-old-k8s-version-718400_kube-system(1b9be8fe0c63c9db9fa83c840ce11e96)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.16.0\" is not set"
	I0229 19:16:06.045519    3012 out.go:177] 
	W0229 19:16:06.047748    3012 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:16:06.047748    3012 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:16:06.047748    3012 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
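	Acting on the suggestion above would mean restarting this profile with the extra kubelet flag it names. A sketch of the retry using the binary and profile name from this run (the report does not establish whether this resolves the failure, which the kubelet log attributes to ImageInspectError rather than the cgroup driver):

	    out/minikube-windows-amd64.exe start -p old-k8s-version-718400 --extra-config=kubelet.cgroup-driver=systemd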
	I0229 19:16:06.051267    3012 out.go:177] 
	
	
	==> Docker <==
	Feb 29 19:03:23 old-k8s-version-718400 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 19:03:23 old-k8s-version-718400 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 19:03:23 old-k8s-version-718400 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 19:03:23 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:23.908449437Z" level=info msg="Starting up"
	Feb 29 19:03:25 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:25.453074169Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 19:03:29 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:29.617627017Z" level=info msg="Loading containers: start."
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.067844531Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.886492123Z" level=info msg="Loading containers: done."
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963282056Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963370460Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963427362Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963436363Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963528467Z" level=info msg="Docker daemon" commit=f417435 containerd-snapshotter=false storage-driver=overlay2 version=25.0.3
	Feb 29 19:03:30 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:30.963595770Z" level=info msg="Daemon has completed initialization"
	Feb 29 19:03:31 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:31.037611678Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 19:03:31 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:03:31.037784386Z" level=info msg="API listen on [::]:2376"
	Feb 29 19:03:31 old-k8s-version-718400 systemd[1]: Started Docker Application Container Engine.
	Feb 29 19:07:53 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:53.877388254Z" level=info msg="ignoring event" container=00dcd44138414fbd8965f28b931eb93164beb21430ec662545b929dfe6822dcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:07:54 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:54.431453799Z" level=info msg="ignoring event" container=58acdae9ec28479642033ce37d3db918ab22684a2dafc6360486626387f00593 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:07:54 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:54.715596600Z" level=info msg="ignoring event" container=7cb8ba66f44be22d5926c76ced15a7c22abc36ed1844c239bd119d2df8cf1bfa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:07:55 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:07:55.002448473Z" level=info msg="ignoring event" container=7623f76bfea3ea27f4702817b0a177403ec6ad37f0ad9034d57a14d6d76bbe5b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:59 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:11:59.168499080Z" level=info msg="ignoring event" container=286f5c24dc54e22c277c2d7533deee004574c731c5a32688673184742035419f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:59 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:11:59.498058343Z" level=info msg="ignoring event" container=ab1416acf165ce195fbbab48c0bf328d249009519a6a04e143342678dc5615a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:59 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:11:59.948431317Z" level=info msg="ignoring event" container=07bad9d32f3c93a5778f334aed71e422983f5ddb923821f3a504c2ed84f9ff1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:12:01 old-k8s-version-718400 dockerd[1108]: time="2024-02-29T19:12:01.183444154Z" level=info msg="ignoring event" container=1af61f1ac1c706a76384505a3e4aeedf60474c1ab76fee0df282eadc631a1d2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	
	
	==> kernel <==
	 19:21:32 up  3:26,  0 users,  load average: 0.19, 2.02, 4.00
	Linux old-k8s-version-718400 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kubelet <==
	Feb 29 19:21:31 old-k8s-version-718400 kubelet[11316]: E0229 19:21:31.653579   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:31 old-k8s-version-718400 kubelet[11316]: E0229 19:21:31.754388   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:31 old-k8s-version-718400 kubelet[11316]: E0229 19:21:31.793016   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-718400&limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:21:31 old-k8s-version-718400 kubelet[11316]: E0229 19:21:31.854947   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:31 old-k8s-version-718400 kubelet[11316]: E0229 19:21:31.956039   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:31 old-k8s-version-718400 kubelet[11316]: E0229 19:21:31.993839   11316 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.057014   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.157649   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.192280   11316 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.258429   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: I0229 19:21:32.289314   11316 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.334543   11316 remote_image.go:94] ImageStatus failed: Id or size of image "k8s.gcr.io/kube-controller-manager:v1.16.0" is not set
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.334644   11316 kuberuntime_image.go:85] ImageStatus for image {"k8s.gcr.io/kube-controller-manager:v1.16.0"} failed: Id or size of image "k8s.gcr.io/kube-controller-manager:v1.16.0" is not set
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.334762   11316 kuberuntime_manager.go:783] container start failed: ImageInspectError: Failed to inspect image "k8s.gcr.io/kube-controller-manager:v1.16.0": Id or size of image "k8s.gcr.io/kube-controller-manager:v1.16.0" is not set
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.334797   11316 pod_workers.go:191] Error syncing pod 9fc427d2e6746d2b3f18846f6f0fcafb ("kube-controller-manager-old-k8s-version-718400_kube-system(9fc427d2e6746d2b3f18846f6f0fcafb)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.16.0\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.16.0\" is not set"
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.358829   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.392505   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.459511   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.560587   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.595801   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)old-k8s-version-718400&limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.661448   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.762126   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.794730   11316 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)old-k8s-version-718400&limit=500&resourceVersion=0: dial tcp 192.168.103.2:8443: connect: connection refused
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.862736   11316 kubelet.go:2267] node "old-k8s-version-718400" not found
	Feb 29 19:21:32 old-k8s-version-718400 kubelet[11316]: E0229 19:21:32.963312   11316 kubelet.go:2267] node "old-k8s-version-718400" not found

-- /stdout --
** stderr ** 
	W0229 19:21:31.424229    9228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
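The "Unable to resolve the current Docker CLI context" warning above is environmental rather than test-specific: it shows up on virtually every minikube invocation captured in this run. The long directory name in the path is the Docker CLI's content-addressed layout for context metadata (~/.docker/contexts/meta/<sha256 of context name>/meta.json), and the hash here is exactly SHA-256 of the string "default", whose meta.json is missing on this Jenkins host. A minimal Go sketch to confirm the mapping:

// Confirm that the directory name in the warning is SHA-256("default"),
// the content-addressed form the Docker CLI uses for context metadata paths.
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
	// 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
}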
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-718400 -n old-k8s-version-718400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 2 (1.1901878s)

-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0229 19:21:33.626707   13504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-718400" apiserver is not running, skipping kubectl commands (state="Stopped")
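The kubelet log above shows the shape of this failure: every list/watch against https://control-plane.minikube.internal:8443 ends in "connect: connection refused", so the node never registers and the status probe reports the apiserver as Stopped. A minimal reachability check against the same endpoint, assuming nothing beyond the address in the log:

// Probe the apiserver endpoint from the kubelet log (192.168.103.2:8443).
// While the control plane is down this fails with "connection refused",
// matching the dial errors repeated above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.103.2:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}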
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (322.33s)

Test pass (282/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 8.94
4 TestDownloadOnly/v1.16.0/preload-exists 0.09
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.72
9 TestDownloadOnly/v1.16.0/DeleteAll 2.09
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 1.44
12 TestDownloadOnly/v1.28.4/json-events 8.32
13 TestDownloadOnly/v1.28.4/preload-exists 0.09
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.59
18 TestDownloadOnly/v1.28.4/DeleteAll 2.36
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.29
21 TestDownloadOnly/v1.29.0-rc.2/json-events 7.72
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.55
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.93
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.23
29 TestDownloadOnlyKic 4.39
30 TestBinaryMirror 3.21
31 TestOffline 193.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
36 TestAddons/Setup 476.91
40 TestAddons/parallel/InspektorGadget 15.67
41 TestAddons/parallel/MetricsServer 7.93
42 TestAddons/parallel/HelmTiller 28.92
44 TestAddons/parallel/CSI 88.88
45 TestAddons/parallel/Headlamp 25.15
46 TestAddons/parallel/CloudSpanner 8.24
47 TestAddons/parallel/LocalPath 17.51
48 TestAddons/parallel/NvidiaDevicePlugin 6.76
49 TestAddons/parallel/Yakd 5.05
52 TestAddons/serial/GCPAuth/Namespaces 0.39
53 TestAddons/StoppedEnableDisable 14.4
54 TestCertOptions 86.21
55 TestCertExpiration 306.13
56 TestDockerFlags 99.99
57 TestForceSystemdFlag 111.68
58 TestForceSystemdEnv 108.46
65 TestErrorSpam/start 4.21
66 TestErrorSpam/status 3.98
67 TestErrorSpam/pause 4.39
68 TestErrorSpam/unpause 4.5
69 TestErrorSpam/stop 20.51
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 98.04
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 42.06
76 TestFunctional/serial/KubeContext 0.12
77 TestFunctional/serial/KubectlGetPods 0.24
80 TestFunctional/serial/CacheCmd/cache/add_remote 6.78
81 TestFunctional/serial/CacheCmd/cache/add_local 3.82
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
83 TestFunctional/serial/CacheCmd/cache/list 0.25
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.16
85 TestFunctional/serial/CacheCmd/cache/cache_reload 5.25
86 TestFunctional/serial/CacheCmd/cache/delete 0.49
87 TestFunctional/serial/MinikubeKubectlCmd 0.41
89 TestFunctional/serial/ExtraConfig 47.79
90 TestFunctional/serial/ComponentHealth 0.19
91 TestFunctional/serial/LogsCmd 2.56
92 TestFunctional/serial/LogsFileCmd 2.71
93 TestFunctional/serial/InvalidService 5.98
97 TestFunctional/parallel/DryRun 3.51
98 TestFunctional/parallel/InternationalLanguage 1.36
99 TestFunctional/parallel/StatusCmd 4.94
104 TestFunctional/parallel/AddonsCmd 0.95
105 TestFunctional/parallel/PersistentVolumeClaim 57.49
107 TestFunctional/parallel/SSHCmd 3.28
108 TestFunctional/parallel/CpCmd 9.79
109 TestFunctional/parallel/MySQL 81.01
110 TestFunctional/parallel/FileSync 1.53
111 TestFunctional/parallel/CertSync 8.42
115 TestFunctional/parallel/NodeLabels 0.25
117 TestFunctional/parallel/NonActiveRuntimeDisabled 1.21
119 TestFunctional/parallel/License 3.12
120 TestFunctional/parallel/ServiceCmd/DeployApp 22.57
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 2.18
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 24.18
126 TestFunctional/parallel/ServiceCmd/List 1.88
127 TestFunctional/parallel/ServiceCmd/JSONOutput 2.08
128 TestFunctional/parallel/ServiceCmd/HTTPS 15.03
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.24
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.23
135 TestFunctional/parallel/Version/short 0.29
136 TestFunctional/parallel/Version/components 3.95
137 TestFunctional/parallel/ImageCommands/ImageListShort 1.15
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.99
139 TestFunctional/parallel/ImageCommands/ImageListJson 1.07
140 TestFunctional/parallel/ImageCommands/ImageListYaml 1.14
141 TestFunctional/parallel/ImageCommands/ImageBuild 8.99
142 TestFunctional/parallel/ImageCommands/Setup 3.93
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 12.72
144 TestFunctional/parallel/ServiceCmd/Format 15.03
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.66
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.65
147 TestFunctional/parallel/ServiceCmd/URL 15.03
148 TestFunctional/parallel/DockerEnv/powershell 11.01
149 TestFunctional/parallel/UpdateContextCmd/no_changes 1.11
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.97
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.99
152 TestFunctional/parallel/ProfileCmd/profile_not_create 1.88
153 TestFunctional/parallel/ProfileCmd/profile_list 1.8
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.01
155 TestFunctional/parallel/ProfileCmd/profile_json_output 2.27
156 TestFunctional/parallel/ImageCommands/ImageRemove 2.74
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 10.15
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.14
159 TestFunctional/delete_addon-resizer_images 0.45
160 TestFunctional/delete_my-image_image 0.17
161 TestFunctional/delete_minikube_cached_images 0.17
165 TestImageBuild/serial/Setup 66.03
166 TestImageBuild/serial/NormalBuild 3.88
167 TestImageBuild/serial/BuildWithBuildArg 2.65
168 TestImageBuild/serial/BuildWithDockerIgnore 2.17
169 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.75
177 TestJSONOutput/start/Command 79.54
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 1.6
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 1.42
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 7.28
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 1.35
202 TestKicCustomNetwork/create_custom_network 76.71
203 TestKicCustomNetwork/use_default_bridge_network 75.32
204 TestKicExistingNetwork 76.78
205 TestKicCustomSubnet 75.53
206 TestKicStaticIP 78.59
207 TestMainNoArgs 0.22
208 TestMinikubeProfile 141.38
211 TestMountStart/serial/StartWithMountFirst 19.04
212 TestMountStart/serial/VerifyMountFirst 1.08
213 TestMountStart/serial/StartWithMountSecond 18.34
214 TestMountStart/serial/VerifyMountSecond 1.05
215 TestMountStart/serial/DeleteFirst 3.84
216 TestMountStart/serial/VerifyMountPostDelete 1.08
217 TestMountStart/serial/Stop 2.45
218 TestMountStart/serial/RestartStopped 13
219 TestMountStart/serial/VerifyMountPostStop 1.05
222 TestMultiNode/serial/FreshStart2Nodes 146.11
223 TestMultiNode/serial/DeployApp2Nodes 26.06
224 TestMultiNode/serial/PingHostFrom2Pods 2.36
225 TestMultiNode/serial/AddNode 52.77
226 TestMultiNode/serial/MultiNodeLabels 0.18
227 TestMultiNode/serial/ProfileList 1.25
228 TestMultiNode/serial/CopyFile 39.17
229 TestMultiNode/serial/StopNode 6.44
230 TestMultiNode/serial/StartAfterStop 23.84
231 TestMultiNode/serial/RestartKeepsNodes 142.31
232 TestMultiNode/serial/DeleteNode 11.15
233 TestMultiNode/serial/StopMultiNode 25.26
234 TestMultiNode/serial/RestartMultiNode 105.7
235 TestMultiNode/serial/ValidateNameConflict 71.78
239 TestPreload 202.06
240 TestScheduledStopWindows 136.26
244 TestInsufficientStorage 49.05
245 TestRunningBinaryUpgrade 376.42
248 TestMissingContainerUpgrade 308.23
250 TestStoppedBinaryUpgrade/Setup 0.99
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
252 TestNoKubernetes/serial/StartWithK8s 120.08
253 TestStoppedBinaryUpgrade/Upgrade 361.18
254 TestNoKubernetes/serial/StartWithStopK8s 69.54
255 TestNoKubernetes/serial/Start 34.13
264 TestPause/serial/Start 127.8
265 TestNoKubernetes/serial/VerifyK8sNotRunning 1.66
266 TestNoKubernetes/serial/ProfileList 14.22
267 TestNoKubernetes/serial/Stop 2.99
268 TestNoKubernetes/serial/StartNoArgs 16.72
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.45
270 TestPause/serial/SecondStartNoReconfiguration 51.88
271 TestStoppedBinaryUpgrade/MinikubeLogs 3.55
272 TestPause/serial/Pause 2.7
273 TestPause/serial/VerifyStatus 1.97
285 TestPause/serial/Unpause 2.1
286 TestPause/serial/PauseAgain 3.83
287 TestPause/serial/DeletePaused 10
288 TestPause/serial/VerifyDeletedResources 4.69
292 TestStartStop/group/no-preload/serial/FirstStart 121.28
293 TestStartStop/group/no-preload/serial/DeployApp 8.71
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.38
295 TestStartStop/group/no-preload/serial/Stop 12.58
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.21
297 TestStartStop/group/no-preload/serial/SecondStart 363.46
299 TestStartStop/group/embed-certs/serial/FirstStart 84.47
300 TestStartStop/group/embed-certs/serial/DeployApp 8.73
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.55
302 TestStartStop/group/embed-certs/serial/Stop 12.27
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.05
304 TestStartStop/group/embed-certs/serial/SecondStart 357.99
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.03
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 27.03
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.57
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.51
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.84
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.88
315 TestStartStop/group/no-preload/serial/Pause 10.8
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.28
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 361.29
319 TestStartStop/group/newest-cni/serial/FirstStart 83.27
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 44.51
321 TestStartStop/group/old-k8s-version/serial/Stop 4.43
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.42
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.27
326 TestStartStop/group/newest-cni/serial/Stop 9.73
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.15
328 TestStartStop/group/newest-cni/serial/SecondStart 61.57
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.49
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.94
331 TestStartStop/group/embed-certs/serial/Pause 9.45
332 TestNetworkPlugins/group/auto/Start 100.66
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.89
336 TestStartStop/group/newest-cni/serial/Pause 10.52
337 TestNetworkPlugins/group/kindnet/Start 103.89
338 TestNetworkPlugins/group/auto/KubeletFlags 1.15
339 TestNetworkPlugins/group/auto/NetCatPod 16.71
340 TestNetworkPlugins/group/auto/DNS 0.35
341 TestNetworkPlugins/group/auto/Localhost 0.33
342 TestNetworkPlugins/group/auto/HairPin 0.32
343 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
344 TestNetworkPlugins/group/kindnet/KubeletFlags 1.29
345 TestNetworkPlugins/group/kindnet/NetCatPod 19.67
346 TestNetworkPlugins/group/kindnet/DNS 0.39
347 TestNetworkPlugins/group/calico/Start 184.07
348 TestNetworkPlugins/group/kindnet/Localhost 0.35
349 TestNetworkPlugins/group/kindnet/HairPin 0.34
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 24.3
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.73
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.16
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 11.87
354 TestNetworkPlugins/group/custom-flannel/Start 118.01
355 TestNetworkPlugins/group/false/Start 97.1
356 TestNetworkPlugins/group/calico/ControllerPod 6.03
357 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.34
358 TestNetworkPlugins/group/false/KubeletFlags 1.52
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 25.91
360 TestNetworkPlugins/group/calico/KubeletFlags 1.68
361 TestNetworkPlugins/group/false/NetCatPod 25.92
362 TestNetworkPlugins/group/calico/NetCatPod 23.82
363 TestNetworkPlugins/group/calico/DNS 0.34
364 TestNetworkPlugins/group/calico/Localhost 0.35
365 TestNetworkPlugins/group/custom-flannel/DNS 0.4
366 TestNetworkPlugins/group/calico/HairPin 0.45
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.37
368 TestNetworkPlugins/group/false/DNS 0.48
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.49
370 TestNetworkPlugins/group/false/Localhost 0.44
371 TestNetworkPlugins/group/false/HairPin 0.43
372 TestNetworkPlugins/group/enable-default-cni/Start 107.14
374 TestNetworkPlugins/group/bridge/Start 113.58
375 TestNetworkPlugins/group/kubenet/Start 130.81
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.18
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 21.69
378 TestNetworkPlugins/group/bridge/KubeletFlags 1.25
379 TestNetworkPlugins/group/bridge/NetCatPod 17.66
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.38
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.32
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.34
383 TestNetworkPlugins/group/bridge/DNS 0.35
384 TestNetworkPlugins/group/bridge/Localhost 0.33
385 TestNetworkPlugins/group/bridge/HairPin 0.32
386 TestNetworkPlugins/group/kubenet/KubeletFlags 1.14
387 TestNetworkPlugins/group/kubenet/NetCatPod 17.65
388 TestNetworkPlugins/group/kubenet/DNS 0.33
389 TestNetworkPlugins/group/kubenet/Localhost 0.3
390 TestNetworkPlugins/group/kubenet/HairPin 0.29
TestDownloadOnly/v1.16.0/json-events (8.94s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-603400 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-603400 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (8.9428078s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.94s)
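The -o=json flag exercised here switches minikube's progress output to a stream of JSON events on stdout, which is presumably what the json-events assertions parse. A sketch of consuming that stream, assuming one JSON object per line; the "type" and "data" field names are illustrative guesses rather than values taken from this report:

// Read the JSON event stream produced by "minikube start -o=json",
// decoding each stdout line as a standalone JSON object.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only", "-p", "demo")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise on stdout
		}
		fmt.Println(ev["type"], ev["data"]) // field names are assumed
	}
	_ = cmd.Wait()
}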

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.09s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.72s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-603400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-603400: exit status 85 (720.578ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-603400 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |          |
	|         | -p download-only-603400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:12
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:12.741180   11664 out.go:291] Setting OutFile to fd 664 ...
	I0229 17:38:12.742374   11664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:12.742374   11664 out.go:304] Setting ErrFile to fd 668...
	I0229 17:38:12.742374   11664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 17:38:12.755407   11664 root.go:314] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0229 17:38:12.766872   11664 out.go:298] Setting JSON to true
	I0229 17:38:12.769664   11664 start.go:129] hostinfo: {"hostname":"minikube7","uptime":6253,"bootTime":1709222039,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 17:38:12.769664   11664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:38:12.785918   11664 out.go:97] [download-only-603400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	W0229 17:38:12.787113   11664 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0229 17:38:12.789889   11664 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:38:12.787439   11664 notify.go:220] Checking for updates...
	I0229 17:38:12.794233   11664 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 17:38:12.797556   11664 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:12.799906   11664 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 17:38:12.806991   11664 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:12.808362   11664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:13.100256   11664 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 17:38:13.110058   11664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:38:14.302659   11664 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.1925931s)
	I0229 17:38:14.303566   11664 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-29 17:38:14.247607941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor
:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnin
gs:<nil>}}
	I0229 17:38:14.308842   11664 out.go:97] Using the docker driver based on user configuration
	I0229 17:38:14.308892   11664 start.go:299] selected driver: docker
	I0229 17:38:14.308892   11664 start.go:903] validating driver "docker" against <nil>
	I0229 17:38:14.324065   11664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:38:14.660634   11664 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-29 17:38:14.618042756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor
:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnin
gs:<nil>}}
	I0229 17:38:14.660634   11664 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:14.767604   11664 start_flags.go:394] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0229 17:38:14.768329   11664 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:14.771061   11664 out.go:169] Using Docker Desktop driver with root privileges
	I0229 17:38:14.774282   11664 cni.go:84] Creating CNI manager for ""
	I0229 17:38:14.774282   11664 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 17:38:14.774282   11664 start_flags.go:323] config:
	{Name:download-only-603400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-603400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:14.777604   11664 out.go:97] Starting control plane node download-only-603400 in cluster download-only-603400
	I0229 17:38:14.777604   11664 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 17:38:14.779229   11664 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 17:38:14.780197   11664 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:38:14.780197   11664 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 17:38:14.820736   11664 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 17:38:14.820854   11664 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:14.821454   11664 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:38:14.824018   11664 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 17:38:14.824018   11664 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:14.892605   11664 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 17:38:14.940537   11664 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 17:38:14.940537   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708944392-18244@sha256_8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar
	I0229 17:38:14.940537   11664 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708944392-18244@sha256_8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar
	I0229 17:38:14.941195   11664 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 17:38:14.942018   11664 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-603400"

-- /stdout --
** stderr ** 
	W0229 17:38:21.720663    8216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.72s)
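The "Last Start" log above also documents the preload path: download.go resolves a versioned tarball on storage.googleapis.com and appends "?checksum=md5:<hex>", signalling the downloader to verify the archive before it lands in the cache. A sketch of the equivalent manual verification, reusing the URL and digest copied from the log; the helper below is an assumption, and the tarball is several hundred megabytes:

// Fetch the preload tarball from the log above and compare its MD5
// against the digest minikube passed in the checksum query string.
package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
)

func fetchMD5(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	h := md5.New()
	if _, err := io.Copy(h, resp.Body); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	const want = "326f3ce331abb64565b50b8c9e791244" // from the log above
	got, err := fetchMD5(url)                       // streams the full archive
	if err != nil {
		panic(err)
	}
	fmt.Println("checksum ok:", got == want)
}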

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (2.09s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.0860946s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (2.09s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.44s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-603400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-603400: (1.4342352s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.44s)

TestDownloadOnly/v1.28.4/json-events (8.32s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-142600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-142600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker: (8.3227653s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (8.32s)

TestDownloadOnly/v1.28.4/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.09s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.59s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-142600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-142600: exit status 85 (586.7816ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-603400 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-603400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-603400        | download-only-603400 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only        | download-only-142600 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-142600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:26
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:26.044503   10192 out.go:291] Setting OutFile to fd 568 ...
	I0229 17:38:26.045480   10192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:26.045480   10192 out.go:304] Setting ErrFile to fd 688...
	I0229 17:38:26.045480   10192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:26.066243   10192 out.go:298] Setting JSON to true
	I0229 17:38:26.069354   10192 start.go:129] hostinfo: {"hostname":"minikube7","uptime":6266,"bootTime":1709222039,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 17:38:26.069354   10192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:38:26.331426   10192 out.go:97] [download-only-142600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:38:26.332713   10192 notify.go:220] Checking for updates...
	I0229 17:38:26.336844   10192 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:38:26.343483   10192 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 17:38:26.350333   10192 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:26.357742   10192 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 17:38:26.366537   10192 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:26.367585   10192 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:26.655655   10192 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 17:38:26.665278   10192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:38:26.999944   10192 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-29 17:38:26.96093074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:
Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warning
s:<nil>}}
	I0229 17:38:27.003701   10192 out.go:97] Using the docker driver based on user configuration
	I0229 17:38:27.004355   10192 start.go:299] selected driver: docker
	I0229 17:38:27.004355   10192 start.go:903] validating driver "docker" against <nil>
	I0229 17:38:27.021160   10192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:38:27.361145   10192 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-29 17:38:27.317945976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor
:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnin
gs:<nil>}}
	I0229 17:38:27.361996   10192 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:27.408664   10192 start_flags.go:394] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0229 17:38:27.409696   10192 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:27.872696   10192 out.go:169] Using Docker Desktop driver with root privileges
	I0229 17:38:27.877370   10192 cni.go:84] Creating CNI manager for ""
	I0229 17:38:27.877370   10192 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:38:27.877486   10192 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:38:27.877565   10192 start_flags.go:323] config:
	{Name:download-only-142600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-142600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:27.880919   10192 out.go:97] Starting control plane node download-only-142600 in cluster download-only-142600
	I0229 17:38:27.881051   10192 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 17:38:27.884930   10192 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 17:38:27.884930   10192 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:38:27.884930   10192 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 17:38:27.929964   10192 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:38:27.929964   10192 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:27.930970   10192 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:38:27.933814   10192 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 17:38:27.933923   10192 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:27.998220   10192 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:38:28.067664   10192 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 17:38:28.067664   10192 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708944392-18244@sha256_8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar
	I0229 17:38:28.067664   10192 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708944392-18244@sha256_8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar
	I0229 17:38:28.067664   10192 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 17:38:28.068396   10192 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 17:38:28.068502   10192 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 17:38:28.068603   10192 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-142600"

-- /stdout --
** stderr ** 
	W0229 17:38:34.372764    5896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.59s)
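
A note on the recurring stderr warning: the "Unable to resolve the current Docker CLI context \"default\"" message above appears in nearly every test in this run, and it is environmental rather than test-specific; the CLI's current context points at metadata that is missing under C:\Users\jenkins.minikube7\.docker\contexts\meta. A minimal inspect-and-reset on the affected host, assuming a stock Docker Desktop install (both are standard docker context subcommands), might look like:

	docker context ls
	docker context use default

"use default" repoints the CLI at the built-in default context, after which the stale metadata path should no longer be consulted.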

TestDownloadOnly/v1.28.4/DeleteAll (2.36s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.3595465s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (2.36s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.29s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-142600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-142600: (1.2893676s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.29s)

TestDownloadOnly/v1.29.0-rc.2/json-events (7.72s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-035900 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-035900 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker: (7.7223004s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (7.72s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.55s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-035900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-035900: exit status 85 (547.875ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-603400 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-603400           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=docker                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-603400           | download-only-603400 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only           | download-only-142600 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-142600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=docker                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-142600           | download-only-142600 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only           | download-only-035900 | minikube7\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-035900           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=docker                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:38
	Running on machine: minikube7
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:38.697409   13588 out.go:291] Setting OutFile to fd 852 ...
	I0229 17:38:38.698017   13588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:38.698017   13588 out.go:304] Setting ErrFile to fd 856...
	I0229 17:38:38.698017   13588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:38.721390   13588 out.go:298] Setting JSON to true
	I0229 17:38:38.725366   13588 start.go:129] hostinfo: {"hostname":"minikube7","uptime":6278,"bootTime":1709222039,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 17:38:38.725513   13588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:38:38.733897   13588 out.go:97] [download-only-035900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:38:38.733897   13588 notify.go:220] Checking for updates...
	I0229 17:38:38.736957   13588 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:38:38.743252   13588 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 17:38:38.749538   13588 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:38.756684   13588 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 17:38:38.763602   13588 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:38.764219   13588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:39.056379   13588 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 17:38:39.066680   13588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:38:39.400138   13588 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-29 17:38:39.35854222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 17:38:39.584003   13588 out.go:97] Using the docker driver based on user configuration
	I0229 17:38:39.584658   13588 start.go:299] selected driver: docker
	I0229 17:38:39.584658   13588 start.go:903] validating driver "docker" against <nil>
	I0229 17:38:39.599673   13588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:38:39.948512   13588 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:76 SystemTime:2024-02-29 17:38:39.905982574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 17:38:39.949256   13588 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:40.003301   13588 start_flags.go:394] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0229 17:38:40.004363   13588 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:40.074684   13588 out.go:169] Using Docker Desktop driver with root privileges
	I0229 17:38:40.080970   13588 cni.go:84] Creating CNI manager for ""
	I0229 17:38:40.080970   13588 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:38:40.081355   13588 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:38:40.081355   13588 start_flags.go:323] config:
	{Name:download-only-035900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-035900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:40.084887   13588 out.go:97] Starting control plane node download-only-035900 in cluster download-only-035900
	I0229 17:38:40.085003   13588 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 17:38:40.088103   13588 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 17:38:40.088103   13588 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:38:40.088103   13588 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 17:38:40.134409   13588 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 17:38:40.134409   13588 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:40.135429   13588 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:38:40.138860   13588 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 17:38:40.138971   13588 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:40.198944   13588 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 17:38:40.257014   13588 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 17:38:40.257226   13588 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708944392-18244@sha256_8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar
	I0229 17:38:40.257226   13588 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.42-1708944392-18244@sha256_8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08.tar
	I0229 17:38:40.257226   13588 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 17:38:40.257226   13588 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 17:38:40.257811   13588 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 17:38:40.258022   13588 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-035900"

-- /stdout --
** stderr ** 
	W0229 17:38:46.334741    8468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.55s)
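
As with the v1.28.4 profile earlier, exit status 85 from "minikube logs" is the expected result for a --download-only profile: this start mode only populates the image and preload caches and never creates a control plane node, so there is nothing to collect logs from. Reduced to its two commands, the sequence the test exercises is:

	out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-035900 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker
	out/minikube-windows-amd64.exe logs -p download-only-035900

The second command prints the audit table and last-start log shown above, then exits 85.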

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.93s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.9342021s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.93s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-035900
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-035900: (1.2309477s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.23s)

TestDownloadOnlyKic (4.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-495900 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-495900 --alsologtostderr --driver=docker: (1.5485178s)
helpers_test.go:175: Cleaning up "download-docker-495900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-495900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-495900: (1.9275548s)
--- PASS: TestDownloadOnlyKic (4.39s)

TestBinaryMirror (3.21s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-792800 --alsologtostderr --binary-mirror http://127.0.0.1:56431 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-792800 --alsologtostderr --binary-mirror http://127.0.0.1:56431 --driver=docker: (1.6931507s)
helpers_test.go:175: Cleaning up "binary-mirror-792800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-792800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-792800: (1.2838439s)
--- PASS: TestBinaryMirror (3.21s)
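
TestBinaryMirror starts a download-only profile with --binary-mirror pointed at a local stub server, checking that the Kubernetes binary downloads can be redirected away from the default upstream. The invocation as exercised above, where the port is whatever ephemeral port the test's local server happened to bind:

	out/minikube-windows-amd64.exe start --download-only -p binary-mirror-792800 --alsologtostderr --binary-mirror http://127.0.0.1:56431 --driver=docker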

TestOffline (193.82s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-130400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-130400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m6.5970363s)
helpers_test.go:175: Cleaning up "offline-docker-130400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-130400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-130400: (7.2223974s)
--- PASS: TestOffline (193.82s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-850800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-850800: exit status 85 (287.618ms)

-- stdout --
	* Profile "addons-850800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850800"

-- /stdout --
** stderr ** 
	W0229 17:39:01.367293    6260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-850800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-850800: exit status 85 (291.6488ms)

-- stdout --
	* Profile "addons-850800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-850800"

-- /stdout --
** stderr ** 
	W0229 17:39:01.364295   11572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)
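
Both PreSetup checks assert the same guard: enabling or disabling an addon against a profile that does not exist must fail fast with exit status 85 and print a recovery hint, rather than implicitly creating a cluster. The hint on stdout is runnable as-is:

	out/minikube-windows-amd64.exe start -p addons-850800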

TestAddons/Setup (476.91s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-850800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-850800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m56.9117868s)
--- PASS: TestAddons/Setup (476.91s)

TestAddons/parallel/InspektorGadget (15.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xqlzp" [9edd6519-e880-474f-9696-6fac8d59ac00] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0291927s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-850800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-850800: (10.6342061s)
--- PASS: TestAddons/parallel/InspektorGadget (15.67s)

TestAddons/parallel/MetricsServer (7.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 32.2473ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-vlj7j" [1dea0492-332e-472d-922e-bf6d1fd46171] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0304038s
addons_test.go:415: (dbg) Run:  kubectl --context addons-850800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 addons disable metrics-server --alsologtostderr -v=1: (2.6768362s)
--- PASS: TestAddons/parallel/MetricsServer (7.93s)

TestAddons/parallel/HelmTiller (28.92s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 64.916ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-48g9g" [6425318d-3f63-41a4-8ed6-abead1f3edc1] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0883481s
addons_test.go:473: (dbg) Run:  kubectl --context addons-850800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-850800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (21.3115424s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 addons disable helm-tiller --alsologtostderr -v=1: (2.438261s)
--- PASS: TestAddons/parallel/HelmTiller (28.92s)

TestAddons/parallel/CSI (88.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 37.6988ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3edd872d-11df-46ef-89db-2dd738235906] Pending
helpers_test.go:344: "task-pv-pod" [3edd872d-11df-46ef-89db-2dd738235906] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3edd872d-11df-46ef-89db-2dd738235906] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 32.0126198s
addons_test.go:584: (dbg) Run:  kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-850800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-850800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-850800 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-850800 delete pod task-pv-pod: (2.3926851s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-850800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6554524d-a03d-4759-8535-890d4d65e0cf] Pending
helpers_test.go:344: "task-pv-pod-restore" [6554524d-a03d-4759-8535-890d4d65e0cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6554524d-a03d-4759-8535-890d4d65e0cf] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.0193212s
addons_test.go:626: (dbg) Run:  kubectl --context addons-850800 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-850800 delete pod task-pv-pod-restore: (1.1675907s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-850800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-850800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.3302407s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 addons disable volumesnapshots --alsologtostderr -v=1: (2.0703638s)
--- PASS: TestAddons/parallel/CSI (88.88s)
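
The CSI pass above is a full provision/snapshot/restore round-trip against the csi-hostpath driver. Stripped of the status polling, the kubectl sequence is the following (the manifest contents live in the suite's testdata directory and are not reproduced in this log):

	kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pvc.yaml
	kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
	kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\snapshot.yaml
	kubectl --context addons-850800 delete pod task-pv-pod
	kubectl --context addons-850800 delete pvc hpvc
	kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
	kubectl --context addons-850800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml

That is: claim storage, write through a pod, snapshot the volume, delete the originals, then restore a new claim from the snapshot and mount it in a second pod.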

TestAddons/parallel/Headlamp (25.15s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-850800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-850800 --alsologtostderr -v=1: (3.1229173s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-s2rnp" [60c586f8-a042-4c10-bc57-44b37578b4dc] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-s2rnp" [60c586f8-a042-4c10-bc57-44b37578b4dc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-s2rnp" [60c586f8-a042-4c10-bc57-44b37578b4dc] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.0192593s
--- PASS: TestAddons/parallel/Headlamp (25.15s)

TestAddons/parallel/CloudSpanner (8.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-77hb9" [34cf1700-bc07-4d61-9a4a-f2f6aa69dc0d] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0112262s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-850800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-850800: (2.1963993s)
--- PASS: TestAddons/parallel/CloudSpanner (8.24s)

TestAddons/parallel/LocalPath (17.51s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-850800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-850800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f2509000-6f70-4ec3-b946-feb405d31151] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f2509000-6f70-4ec3-b946-feb405d31151] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f2509000-6f70-4ec3-b946-feb405d31151] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0205165s
addons_test.go:891: (dbg) Run:  kubectl --context addons-850800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 ssh "cat /opt/local-path-provisioner/pvc-04e4d91b-9066-49ef-ac7d-7fe2bca31e4e_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 ssh "cat /opt/local-path-provisioner/pvc-04e4d91b-9066-49ef-ac7d-7fe2bca31e4e_default_test-pvc/file1": (1.7183571s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-850800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-850800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1.333395s)
--- PASS: TestAddons/parallel/LocalPath (17.51s)

TestAddons/parallel/NvidiaDevicePlugin (6.76s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q65fz" [c8c0ab75-0c21-4121-89ef-652f31ebf3a0] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0144749s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-850800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-850800: (1.7452587s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

TestAddons/parallel/Yakd (5.05s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-4xrdl" [0caaeff1-5420-42f1-af15-36ab4966799e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0456378s
--- PASS: TestAddons/parallel/Yakd (5.05s)

TestAddons/serial/GCPAuth/Namespaces (0.39s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-850800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-850800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.39s)

TestAddons/StoppedEnableDisable (14.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-850800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-850800: (12.8134338s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-850800
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-850800
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-850800
--- PASS: TestAddons/StoppedEnableDisable (14.40s)

TestCertOptions (86.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-476400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-476400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m16.8685567s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-476400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-476400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.2117578s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-476400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-476400 -- "sudo cat /etc/kubernetes/admin.conf": (1.2405026s)
helpers_test.go:175: Cleaning up "cert-options-476400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-476400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-476400: (6.7124903s)
--- PASS: TestCertOptions (86.21s)
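
The openssl probe above exists to confirm that the extra --apiserver-ips/--apiserver-names values ended up as SANs in the API server certificate. The same check can be made with Go's standard library once the cert has been copied off the node; a sketch (the local file name apiserver.crt is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Assumes apiserver.crt was first copied out of the node (e.g. via
	// `minikube ssh` + cat) into a local file.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // should include localhost, www.google.com
	fmt.Println("IP SANs:", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
	want := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			fmt.Println("requested --apiserver-ips value is present")
		}
	}
}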

TestCertExpiration (306.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-080100 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-080100 --memory=2048 --cert-expiration=3m --driver=docker: (1m28.3670783s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-080100 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-080100 --memory=2048 --cert-expiration=8760h --driver=docker: (32.0802953s)
helpers_test.go:175: Cleaning up "cert-expiration-080100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-080100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-080100: (5.6655167s)
--- PASS: TestCertExpiration (306.13s)

TestDockerFlags (99.99s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-330600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-330600 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m27.4292091s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-330600 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-330600 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.210694s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-330600 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-330600 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.3814161s)
helpers_test.go:175: Cleaning up "docker-flags-330600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-330600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-330600: (9.9724827s)
--- PASS: TestDockerFlags (99.99s)
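
The two systemctl probes above are the real assertions here: the unit's Environment property must carry the --docker-env values (FOO=BAR, BAZ=BAT) and ExecStart must carry the --docker-opt values. A reduced Go sketch of the Environment half, shelling out over `minikube ssh` the way the test does (profile name taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// showProperty runs `systemctl show docker --property=<prop>` inside the
// node over `minikube ssh` and returns the raw "Prop=..." line.
func showProperty(profile, prop string) (string, error) {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl show docker --property="+prop+" --no-pager").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	env, err := showProperty("docker-flags-330600", "Environment")
	if err != nil {
		panic(err)
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(env, want) {
			fmt.Println("missing --docker-env value:", want)
		}
	}
}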

TestForceSystemdFlag (111.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-746400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-746400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m41.9012698s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-746400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-746400 ssh "docker info --format {{.CgroupDriver}}": (1.3799992s)
helpers_test.go:175: Cleaning up "force-systemd-flag-746400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-746400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-746400: (8.4011076s)
--- PASS: TestForceSystemdFlag (111.68s)
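
The one assertion in this test is the `docker info --format {{.CgroupDriver}}` probe: with --force-systemd the node's Docker daemon should answer systemd rather than the default cgroupfs. A short sketch of that check (profile name from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-746400", "ssh",
		"docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	}
}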

TestForceSystemdEnv (108.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-439500 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-439500 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m39.382421s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-439500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-439500 ssh "docker info --format {{.CgroupDriver}}": (1.376993s)
helpers_test.go:175: Cleaning up "force-systemd-env-439500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-439500
E0229 18:49:41.127776    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-439500: (7.7021718s)
--- PASS: TestForceSystemdEnv (108.46s)

TestErrorSpam/start (4.21s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 start --dry-run: (1.3988222s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 start --dry-run: (1.3963148s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 start --dry-run: (1.3826507s)
--- PASS: TestErrorSpam/start (4.21s)
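
Each TestErrorSpam subtest repeats the same subcommand three times and fails if any run writes stderr lines outside a small set of expected ones; that is what the Run/Done triples above exercise. A reduced sketch of the idea (the allowlist below is illustrative, not minikube's actual list):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// runQuiet executes the command and returns any stderr lines not covered
// by the allowlist of expected substrings.
func runQuiet(allow []string, name string, args ...string) ([]string, error) {
	var stderr bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stderr = &stderr
	err := cmd.Run()
	var unexpected []string
	for _, line := range strings.Split(stderr.String(), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		ok := false
		for _, a := range allow {
			if strings.Contains(line, a) {
				ok = true
				break
			}
		}
		if !ok {
			unexpected = append(unexpected, line)
		}
	}
	return unexpected, err
}

func main() {
	allow := []string{"! ", "* "} // illustrative allowlist
	for i := 0; i < 3; i++ {      // the subtests repeat each command three times
		bad, _ := runQuiet(allow, "minikube", "-p", "nospam-059300", "status")
		if len(bad) > 0 {
			fmt.Println("unexpected stderr:", bad)
		}
	}
}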

TestErrorSpam/status (3.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 status: (1.3172538s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 status: (1.2263043s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 status: (1.4353993s)
--- PASS: TestErrorSpam/status (3.98s)

TestErrorSpam/pause (4.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 pause: (2.0121872s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 pause: (1.2526521s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 pause: (1.1253337s)
--- PASS: TestErrorSpam/pause (4.39s)

TestErrorSpam/unpause (4.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 unpause: (1.3761467s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 unpause: (1.4373888s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 unpause: (1.6853709s)
--- PASS: TestErrorSpam/unpause (4.50s)

TestErrorSpam/stop (20.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 stop: (12.4064231s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 stop: (4.2013983s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-059300 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-059300 stop: (3.879515s)
--- PASS: TestErrorSpam/stop (20.51s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\5660\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (98.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-686300 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0229 17:51:58.493279    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:58.514331    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:58.532329    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:58.559439    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:58.603024    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:58.696115    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:58.865127    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:59.193720    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:51:59.840893    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:52:01.130844    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:52:03.706372    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:52:08.833560    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 17:52:19.075602    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-686300 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m38.0290359s)
--- PASS: TestFunctional/serial/StartWithProxy (98.04s)
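
The E0229 cert_rotation.go burst above appears to be client-go noise rather than a failure of this test: the shared kubeconfig still references the already torn-down addons-850800 profile, so the certificate watcher in the test process keeps retrying a client.crt that no longer exists on disk. A diagnostic sketch that lists kubeconfig users whose client-certificate files are gone (uses k8s.io/client-go; the path is the KUBECONFIG shown in this run's output):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is the KUBECONFIG printed by this run.
	cfg, err := clientcmd.LoadFromFile(`C:\Users\jenkins.minikube7\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, statErr := os.Stat(auth.ClientCertificate); os.IsNotExist(statErr) {
			// Entries like these are what cert_rotation.go keeps retrying.
			fmt.Printf("stale user %q: missing %s\n", name, auth.ClientCertificate)
		}
	}
}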

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.06s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-686300 --alsologtostderr -v=8
E0229 17:52:39.573006    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-686300 --alsologtostderr -v=8: (42.0585048s)
functional_test.go:659: soft start took 42.0601932s for "functional-686300" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.06s)

TestFunctional/serial/KubeContext (0.12s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.24s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-686300 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.24s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cache add registry.k8s.io/pause:3.1: (2.3030149s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cache add registry.k8s.io/pause:3.3
E0229 17:53:20.547000    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cache add registry.k8s.io/pause:3.3: (2.2005828s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cache add registry.k8s.io/pause:latest: (2.2708073s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.78s)

TestFunctional/serial/CacheCmd/cache/add_local (3.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-686300 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2894482363\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-686300 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2894482363\001: (1.725531s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cache add minikube-local-cache-test:functional-686300
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cache add minikube-local-cache-test:functional-686300: (1.6533219s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cache delete minikube-local-cache-test:functional-686300
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-686300
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.82s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

TestFunctional/serial/CacheCmd/cache/list (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.25s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh sudo crictl images: (1.1604509s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.16s)

TestFunctional/serial/CacheCmd/cache/cache_reload (5.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.126032s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.1824306s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0229 17:53:30.036753    8696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cache reload: (1.7721123s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.1552302s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.25s)
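
The reload cycle above is: remove the image from the node's Docker, confirm crictl no longer finds it (the expected non-zero exit shown), run `cache reload` to re-push the host-side cached tarball, then confirm crictl finds it again. The same sequence as a plain command list in Go (a sketch; error handling collapsed into a panic):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile, img := "functional-686300", "registry.k8s.io/pause:latest"
	steps := [][]string{
		{"minikube", "-p", profile, "ssh", "sudo docker rmi " + img},      // drop from the node
		{"minikube", "-p", profile, "cache", "reload"},                    // re-push from host cache
		{"minikube", "-p", profile, "ssh", "sudo crictl inspecti " + img}, // must resolve again
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %v\n%s", s, err, out))
		}
	}
}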

TestFunctional/serial/CacheCmd/cache/delete (0.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.49s)

TestFunctional/serial/MinikubeKubectlCmd (0.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 kubectl -- --context functional-686300 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

TestFunctional/serial/ExtraConfig (47.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-686300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-686300 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.7896649s)
functional_test.go:757: restart took 47.7899295s for "functional-686300" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.79s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-686300 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)
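
ComponentHealth fetches every tier=control-plane pod as JSON and requires phase Running plus a Ready=True condition, which is what the phase/status pairs above report. A sketch of that check with minimal structs over `kubectl -o=json` (context name taken from this run):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-686300",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Static control-plane pods carry a "component" label.
		fmt.Printf("%s phase: %s, Ready: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}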

TestFunctional/serial/LogsCmd (2.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 logs: (2.5570167s)
--- PASS: TestFunctional/serial/LogsCmd (2.56s)

TestFunctional/serial/LogsFileCmd (2.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2990380708\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2990380708\001\logs.txt: (2.7063938s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.71s)

TestFunctional/serial/InvalidService (5.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-686300 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-686300
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-686300: exit status 115 (1.6685217s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30746 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	W0229 17:54:38.103207    8552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_service_c9bf6787273d25f6c9d72c0b156373dea6a4fe44_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-686300 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.98s)
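
`minikube service` exits with status 115 here (SVC_UNREACHABLE) because no running pod backs invalid-svc, and the test asserts exactly that non-zero exit. In Go, the exit code of a failed CLI invocation comes out of exec.ExitError; a sketch:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-686300")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// This run produced 115 for a service with no running backing pod.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}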

TestFunctional/parallel/DryRun (3.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-686300 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-686300 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.6095538s)

-- stdout --
	* [functional-686300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	W0229 17:55:42.817683   14364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 17:55:42.916196   14364 out.go:291] Setting OutFile to fd 1244 ...
	I0229 17:55:42.916916   14364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:55:42.916916   14364 out.go:304] Setting ErrFile to fd 1348...
	I0229 17:55:42.916916   14364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:55:42.942611   14364 out.go:298] Setting JSON to false
	I0229 17:55:42.946646   14364 start.go:129] hostinfo: {"hostname":"minikube7","uptime":7303,"bootTime":1709222039,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 17:55:42.946646   14364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:55:42.950678   14364 out.go:177] * [functional-686300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:55:42.954299   14364 notify.go:220] Checking for updates...
	I0229 17:55:42.957095   14364 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:55:42.960412   14364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:55:42.962877   14364 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 17:55:42.965583   14364 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:55:42.968229   14364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:55:42.971774   14364 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:55:42.972811   14364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:55:43.415353   14364 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 17:55:43.435349   14364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:55:44.107152   14364 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:86 SystemTime:2024-02-29 17:55:44.015641539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 17:55:44.124142   14364 out.go:177] * Using the docker driver based on existing profile
	I0229 17:55:44.127381   14364 start.go:299] selected driver: docker
	I0229 17:55:44.127381   14364 start.go:903] validating driver "docker" against &{Name:functional-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-686300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:55:44.127436   14364 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:55:44.209628   14364 out.go:177] 
	W0229 17:55:44.212442   14364 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 17:55:44.215615   14364 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-686300 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-686300 --dry-run --alsologtostderr -v=1 --driver=docker: (1.9024407s)
--- PASS: TestFunctional/parallel/DryRun (3.51s)
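
The exit-23 dry run above never creates a container: --dry-run still validates the requested config, and 250MiB falls below the 1800MB usable minimum quoted in the message. The shape of that preflight check (thresholds copied from the message; minikube's real validation lives in its start path, this is only an illustration):

package main

import "fmt"

// minUsableMB mirrors the minimum quoted in the error message above.
const minUsableMB = 1800

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf(
			"requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // fails, as in the dry run above
	fmt.Println(validateRequestedMemory(4000)) // passes, matching the profile's 4000MB
}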

TestFunctional/parallel/InternationalLanguage (1.36s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-686300 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-686300 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3596071s)

-- stdout --
	* [functional-686300] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	W0229 17:55:41.471449    6544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 17:55:41.558839    6544 out.go:291] Setting OutFile to fd 1500 ...
	I0229 17:55:41.559837    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:55:41.559837    6544 out.go:304] Setting ErrFile to fd 1492...
	I0229 17:55:41.559837    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:55:41.587492    6544 out.go:298] Setting JSON to false
	I0229 17:55:41.591969    6544 start.go:129] hostinfo: {"hostname":"minikube7","uptime":7301,"bootTime":1709222039,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0229 17:55:41.592241    6544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:55:41.608227    6544 out.go:177] * [functional-686300] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:55:41.611625    6544 notify.go:220] Checking for updates...
	I0229 17:55:41.614417    6544 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0229 17:55:41.617079    6544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:55:41.622115    6544 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0229 17:55:41.625404    6544 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:55:41.631678    6544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:55:41.635692    6544 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:55:41.636435    6544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:55:42.007283    6544 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 17:55:42.016622    6544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 17:55:42.539124    6544 info.go:266] docker info: {ID:0ef20c77-55be-412e-b76a-bd4063eba5cd Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:89 SystemTime:2024-02-29 17:55:42.457890182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:10 KernelVersion:5.15.133.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657516032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 17:55:42.548187    6544 out.go:177] * Using the docker driver based on the existing profile
	I0229 17:55:42.553223    6544 start.go:299] selected driver: docker
	I0229 17:55:42.553306    6544 start.go:903] validating driver "docker" against &{Name:functional-686300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-686300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:55:42.553606    6544 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:55:42.616725    6544 out.go:177] 
	W0229 17:55:42.620213    6544 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 17:55:42.624278    6544 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.36s)

TestFunctional/parallel/StatusCmd (4.94s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 status: (1.5580193s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.7051281s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 status -o json: (1.6810918s)
--- PASS: TestFunctional/parallel/StatusCmd (4.94s)
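The -f flag above takes a Go text/template rendered over minikube's status fields; the "kublet" label in the test's format string is literal template text, and only {{.Kubelet}} is evaluated. A minimal sketch of that rendering, using a stand-in Status struct (minikube's actual type and field values may differ):

package main

import (
	"os"
	"text/template"
)

// Stand-in for minikube's status payload; field names match the template
// keys used above, values are illustrative.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Format string as passed to "status -f" in the run above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}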

TestFunctional/parallel/AddonsCmd (0.95s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.95s)

TestFunctional/parallel/PersistentVolumeClaim (57.49s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e194a7c7-6279-4253-9460-70ca0f741a14] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0232595s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-686300 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-686300 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-686300 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-686300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b5e3d23-05b6-4a66-b063-47facffd79f0] Pending
helpers_test.go:344: "sp-pod" [4b5e3d23-05b6-4a66-b063-47facffd79f0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b5e3d23-05b6-4a66-b063-47facffd79f0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 39.0172679s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-686300 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-686300 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-686300 delete -f testdata/storage-provisioner/pod.yaml: (1.9226237s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-686300 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dcdcb8bd-401b-4357-a283-5786960905cd] Pending
helpers_test.go:344: "sp-pod" [dcdcb8bd-401b-4357-a283-5786960905cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dcdcb8bd-401b-4357-a283-5786960905cd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0209618s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-686300 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.49s)
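The sequence above is the standard persistence check: write into the PVC-backed mount, delete the pod, recreate it against the same claim, and confirm the file survived. A minimal sketch of the same steps driven through kubectl, assuming kubectl on PATH and the functional-686300 context; the waits for pod readiness between steps are elided:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the test cluster's context and echoes output.
func run(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-686300"}, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v:\n%s\n", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"}, // write into the mounted volume
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"}, // fresh pod, same claim
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},             // foo must still be listed
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			panic(err) // the real test also waits for the pod to become Running
		}
	}
}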

TestFunctional/parallel/SSHCmd (3.28s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "echo hello": (1.6184805s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "cat /etc/hostname": (1.661982s)
--- PASS: TestFunctional/parallel/SSHCmd (3.28s)

TestFunctional/parallel/CpCmd (9.79s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.4258925s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh -n functional-686300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh -n functional-686300 "sudo cat /home/docker/cp-test.txt": (1.7155266s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cp functional-686300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd1067869032\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cp functional-686300:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd1067869032\001\cp-test.txt: (1.9819408s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh -n functional-686300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh -n functional-686300 "sudo cat /home/docker/cp-test.txt": (1.8870436s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.1046499s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh -n functional-686300 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh -n functional-686300 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.6663271s)
--- PASS: TestFunctional/parallel/CpCmd (9.79s)
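The test exercises three copy directions: host to node, node to host, and host to node into a directory that does not exist yet (which the passing /tmp/does/not/exist step shows minikube cp creates). A sketch, assuming minikube on PATH; the C:\Temp destination is a hypothetical stand-in for the temp dir used above:

package main

import "os/exec"

// cp shells out to "minikube cp" against the functional-686300 profile.
func cp(src, dst string) error {
	return exec.Command("minikube", "-p", "functional-686300", "cp", src, dst).Run()
}

func main() {
	pairs := [][2]string{
		{`testdata\cp-test.txt`, "/home/docker/cp-test.txt"},                  // host -> node
		{"functional-686300:/home/docker/cp-test.txt", `C:\Temp\cp-test.txt`}, // node -> host
		{`testdata\cp-test.txt`, "/tmp/does/not/exist/cp-test.txt"},           // host -> node, parents created
	}
	for _, p := range pairs {
		if err := cp(p[0], p[1]); err != nil {
			panic(err)
		}
	}
}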

TestFunctional/parallel/MySQL (81.01s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-686300 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-7zwjp" [af97acf0-e94d-433a-b82d-fd73e78559f6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-7zwjp" [af97acf0-e94d-433a-b82d-fd73e78559f6] Running
E0229 17:56:58.475408    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m4.0220805s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;": exit status 1 (493.8323ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;": exit status 1 (301.1108ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;": exit status 1 (291.4881ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;": exit status 1 (294.6291ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;": exit status 1 (287.5237ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686300 exec mysql-859648c796-7zwjp -- mysql -ppassword -e "show databases;"
E0229 17:57:26.326447    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (81.01s)
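The five failed attempts above are expected during MySQL startup: ERROR 2002 while the server socket is not up yet, then ERROR 1045 while the entrypoint is still provisioning the root password. The test simply retries the query until it succeeds; a sketch of that polling loop (the pod name is the one from this run and differs per run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-686300",
			"exec", "mysql-859648c796-7zwjp", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out) // server is accepting authenticated queries
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("mysql never became ready: %v\n%s", err, out))
		}
		time.Sleep(5 * time.Second) // ERROR 2002/1045 are transient here
	}
}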

TestFunctional/parallel/FileSync (1.53s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5660/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/test/nested/copy/5660/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/test/nested/copy/5660/hosts": (1.5249797s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.53s)

TestFunctional/parallel/CertSync (8.42s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5660.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/5660.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/5660.pem": (1.2440433s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5660.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /usr/share/ca-certificates/5660.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /usr/share/ca-certificates/5660.pem": (1.6876233s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.4932833s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/56602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/56602.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/56602.pem": (1.2100077s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/56602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /usr/share/ca-certificates/56602.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /usr/share/ca-certificates/56602.pem": (1.2634725s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.5178683s)
--- PASS: TestFunctional/parallel/CertSync (8.42s)
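The .pem file is checked in both certificate locations, and the third path, /etc/ssl/certs/51391683.0, follows the c_rehash convention: the OpenSSL subject hash of the certificate plus a ".0" suffix. A sketch of deriving that name, assuming openssl is installed and the synced test cert is available locally as 5660.pem (a hypothetical local copy):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "openssl x509 -hash" prints the subject hash used for the symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", "5660.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}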

TestFunctional/parallel/NodeLabels (0.25s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-686300 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.25s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.21s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 ssh "sudo systemctl is-active crio": exit status 1 (1.2022067s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0229 17:55:12.714797    2436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.21s)
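The non-zero exit here is the passing condition: with the docker runtime active, crio must be inactive, and systemctl is-active prints "inactive" and exits 3 (surfaced above as "ssh: Process exited with status 3"). A sketch of the same assertion, under the usual minikube-on-PATH assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-686300",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	// A non-nil err is expected: "inactive" makes systemctl exit non-zero.
	if err != nil && strings.Contains(string(out), "inactive") {
		fmt.Println("crio correctly disabled")
		return
	}
	panic(fmt.Sprintf("crio unexpectedly active: %v\n%s", err, out))
}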

TestFunctional/parallel/License (3.12s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.1090193s)
--- PASS: TestFunctional/parallel/License (3.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (22.57s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-686300 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-686300 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zrdjg" [699e497d-4795-4848-aab9-074608e23cf8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zrdjg" [699e497d-4795-4848-aab9-074608e23cf8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.0159488s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.57s)
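The deploy-and-expose sequence above in sketch form: create the deployment from the echoserver image, expose it as a NodePort service on 8080, then poll for a Running pod (the harness uses a proper pod watcher rather than this naive loop):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// kc runs kubectl against the functional-686300 context.
func kc(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-686300"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	for _, args := range [][]string{
		{"create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
	} {
		if out, err := kc(args...); err != nil {
			panic(out)
		}
	}
	for i := 0; i < 60; i++ {
		out, _ := kc("get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[0].status.phase}")
		if strings.TrimSpace(out) == "Running" {
			fmt.Println("hello-node is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("pod never became Running")
}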

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (2.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-686300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-686300 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-686300 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-686300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9424: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 14928: OpenProcess: The parameter is incorrect.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (2.18s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-686300 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (24.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-686300 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Done: kubectl --context functional-686300 apply -f testdata\testsvc.yaml: (1.0792835s)
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [01c21a4f-d9a7-4380-bb22-f9d87c7d8ec8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [01c21a4f-d9a7-4380-bb22-f9d87c7d8ec8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.0664767s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (24.18s)

TestFunctional/parallel/ServiceCmd/List (1.88s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 service list: (1.8789109s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (2.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 service list -o json: (2.0754918s)
functional_test.go:1490: Took "2.0759521s" to run "out/minikube-windows-amd64.exe -p functional-686300 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (2.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 service --namespace=default --https --url hello-node: exit status 1 (15.0307404s)

-- stdout --
	https://127.0.0.1:57436

-- /stdout --
** stderr ** 
	W0229 17:55:07.228062    9024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:57436
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)
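The exit status 1 is expected: with the Docker driver on Windows the service command keeps a tunnel process attached to the terminal, so the harness kills it after scraping the URL it printed. A sketch of that scrape:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-686300",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		if line := strings.TrimSpace(sc.Text()); strings.HasPrefix(line, "https://") {
			fmt.Println("endpoint:", line)
			break // URL captured; the blocking tunnel can go away now
		}
	}
	cmd.Process.Kill() // the command never exits on its own here
}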

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.24s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-686300 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.24s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-686300 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11200: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 6920: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

TestFunctional/parallel/Version/short (0.29s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 version --short
--- PASS: TestFunctional/parallel/Version/short (0.29s)

TestFunctional/parallel/Version/components (3.95s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 version -o=json --components: (3.9533321s)
--- PASS: TestFunctional/parallel/Version/components (3.95s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls --format short --alsologtostderr: (1.1498791s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-686300 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-686300
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-686300
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-686300 image ls --format short --alsologtostderr:
W0229 17:56:17.977467    5308 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 17:56:18.118490    5308 out.go:291] Setting OutFile to fd 848 ...
I0229 17:56:18.136237    5308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:18.136237    5308 out.go:304] Setting ErrFile to fd 1464...
I0229 17:56:18.136237    5308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:18.159101    5308 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:18.159773    5308 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:18.183585    5308 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
I0229 17:56:18.442254    5308 ssh_runner.go:195] Run: systemctl --version
I0229 17:56:18.459352    5308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
I0229 17:56:18.667003    5308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
I0229 17:56:18.838883    5308 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.15s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-686300 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| gcr.io/google-containers/addon-resizer      | functional-686300 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-686300 | 7f3cbf89ad42c | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-686300 image ls --format table --alsologtostderr:
W0229 17:56:20.179691   13352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 17:56:20.288110   13352 out.go:291] Setting OutFile to fd 1512 ...
I0229 17:56:20.289100   13352 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:20.289163   13352 out.go:304] Setting ErrFile to fd 1508...
I0229 17:56:20.289163   13352 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:20.309453   13352 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:20.311091   13352 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:20.339323   13352 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
I0229 17:56:20.564541   13352 ssh_runner.go:195] Run: systemctl --version
I0229 17:56:20.577144   13352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
I0229 17:56:20.790237   13352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
I0229 17:56:20.940921   13352 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.99s)

TestFunctional/parallel/ImageCommands/ImageListJson (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls --format json --alsologtostderr: (1.0666753s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-686300 image ls --format json --alsologtostderr:
[{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-686300"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"7f3cbf89ad42c57f6bca09b606754b5536721a43695b14eeb72bbe6c6943ec44","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-686300"],"size":"30"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-686300 image ls --format json --alsologtostderr:
W0229 17:56:19.112199    4432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 17:56:19.235909    4432 out.go:291] Setting OutFile to fd 1376 ...
I0229 17:56:19.236452    4432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:19.236452    4432 out.go:304] Setting ErrFile to fd 1584...
I0229 17:56:19.236452    4432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:19.255569    4432 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:19.256253    4432 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:19.279489    4432 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
I0229 17:56:19.498048    4432 ssh_runner.go:195] Run: systemctl --version
I0229 17:56:19.508030    4432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
I0229 17:56:19.721824    4432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
I0229 17:56:19.926322    4432 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.07s)
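The JSON format is an array of image records with sizes reported as decimal strings in bytes; a sketch of decoding it (the struct name is mine, the field tags are taken from the output above):

package main

import (
	"encoding/json"
	"fmt"
)

// listedImage mirrors one element of "image ls --format json" output.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	raw := `[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]`
	var images []listedImage
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size, "bytes") // size is a decimal string
	}
}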

TestFunctional/parallel/ImageCommands/ImageListYaml (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls --format yaml --alsologtostderr: (1.1382569s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-686300 image ls --format yaml --alsologtostderr:
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 7f3cbf89ad42c57f6bca09b606754b5536721a43695b14eeb72bbe6c6943ec44
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-686300
size: "30"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-686300
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-686300 image ls --format yaml --alsologtostderr:
W0229 17:56:17.945881    5928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 17:56:18.060733    5928 out.go:291] Setting OutFile to fd 1392 ...
I0229 17:56:18.062157    5928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:18.062157    5928 out.go:304] Setting ErrFile to fd 1116...
I0229 17:56:18.062222    5928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:18.087924    5928 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:18.088450    5928 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:18.105743    5928 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
I0229 17:56:18.365699    5928 ssh_runner.go:195] Run: systemctl --version
I0229 17:56:18.379904    5928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
I0229 17:56:18.620751    5928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
I0229 17:56:18.818512    5928 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.14s)

TestFunctional/parallel/ImageCommands/ImageBuild (8.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 ssh pgrep buildkitd: exit status 1 (1.3744058s)

** stderr ** 
	W0229 17:56:19.102366    6044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image build -t localhost/my-image:functional-686300 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image build -t localhost/my-image:functional-686300 testdata\build --alsologtostderr: (6.6111647s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-686300 image build -t localhost/my-image:functional-686300 testdata\build --alsologtostderr:
W0229 17:56:20.474165    8828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 17:56:20.585702    8828 out.go:291] Setting OutFile to fd 1384 ...
I0229 17:56:20.599210    8828 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:20.599253    8828 out.go:304] Setting ErrFile to fd 1676...
I0229 17:56:20.599253    8828 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:56:20.617431    8828 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:20.636420    8828 config.go:182] Loaded profile config "functional-686300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:56:20.655217    8828 cli_runner.go:164] Run: docker container inspect functional-686300 --format={{.State.Status}}
I0229 17:56:20.898268    8828 ssh_runner.go:195] Run: systemctl --version
I0229 17:56:20.903666    8828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-686300
I0229 17:56:21.131611    8828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57190 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\functional-686300\id_rsa Username:docker}
I0229 17:56:21.280129    8828 build_images.go:151] Building image from path: C:\Users\jenkins.minikube7\AppData\Local\Temp\build.3687957695.tar
I0229 17:56:21.294097    8828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 17:56:21.325174    8828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3687957695.tar
I0229 17:56:21.338062    8828 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3687957695.tar: stat -c "%s %y" /var/lib/minikube/build/build.3687957695.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3687957695.tar': No such file or directory
I0229 17:56:21.338746    8828 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\AppData\Local\Temp\build.3687957695.tar --> /var/lib/minikube/build/build.3687957695.tar (3072 bytes)
I0229 17:56:21.390824    8828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3687957695
I0229 17:56:21.427209    8828 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3687957695 -xf /var/lib/minikube/build/build.3687957695.tar
I0229 17:56:21.447851    8828 docker.go:360] Building image: /var/lib/minikube/build/build.3687957695
I0229 17:56:21.461904    8828 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-686300 /var/lib/minikube/build/build.3687957695
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.3s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.8s

#6 [2/3] RUN true
#6 DONE 2.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.3s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:20442e68db48ee7231431575f4b7536e379ac928c5c0513acef9fdecfab1e4a8
#8 writing image sha256:20442e68db48ee7231431575f4b7536e379ac928c5c0513acef9fdecfab1e4a8 done
#8 naming to localhost/my-image:functional-686300 0.0s done
#8 DONE 0.2s
I0229 17:56:26.734148    8828 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-686300 /var/lib/minikube/build/build.3687957695: (5.2722055s)
I0229 17:56:26.750201    8828 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3687957695
I0229 17:56:26.842629    8828 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3687957695.tar
I0229 17:56:26.917884    8828 build_images.go:207] Built localhost/my-image:functional-686300 from C:\Users\jenkins.minikube7\AppData\Local\Temp\build.3687957695.tar
I0229 17:56:26.917884    8828 build_images.go:123] succeeded building to: functional-686300
I0229 17:56:26.917884    8828 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls: (1.0077897s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.99s)
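
The BuildKit steps above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt /) imply that testdata\build contains a Dockerfile along these lines; this is a reconstruction from the logged build, not the actual file:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The ssh_runner lines also show how `image build` works on the docker driver: the build context is tarred locally, copied to /var/lib/minikube/build on the node, and built there with a plain docker build.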

TestFunctional/parallel/ImageCommands/Setup (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.6760836s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-686300
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.93s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (12.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr: (11.4181519s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls: (1.3005259s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (12.72s)
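
Together with ImageCommands/Setup above, this is the full pull → tag → load round trip; the sketch below only collects the commands the harness already ran, in order:

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-686300
    out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300
    out/minikube-windows-amd64.exe -p functional-686300 image ls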

TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 service hello-node --url --format={{.IP}}: exit status 1 (15.0288257s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	W0229 17:55:22.304816    4648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr: (5.8190309s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.66s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.7752167s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-686300
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image load --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr: (10.5922809s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls: (1.0020535s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.65s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-686300 service hello-node --url: exit status 1 (15.0306136s)

-- stdout --
	http://127.0.0.1:57493

-- /stdout --
** stderr ** 
	W0229 17:55:37.274079   12500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:57493
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestFunctional/parallel/DockerEnv/powershell (11.01s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-686300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-686300"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-686300 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-686300": (6.6671221s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-686300 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-686300 docker-env | Invoke-Expression ; docker images": (4.3127974s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (11.01s)
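
The pattern under test pipes `docker-env` output through Invoke-Expression so the host docker client targets the daemon inside the functional-686300 container. On PowerShell, minikube docker-env typically emits assignments of this shape (illustrative values; the port and paths are placeholders, not taken from this run):

    $Env:DOCKER_TLS_VERIFY = "1"
    $Env:DOCKER_HOST = "tcp://127.0.0.1:<mapped-port>"
    $Env:DOCKER_CERT_PATH = "C:\Users\<user>\.minikube\certs"
    $Env:MINIKUBE_ACTIVE_DOCKERD = "functional-686300"

This is why the follow-up `docker images` in the second command lists the cluster's images rather than the host's.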

TestFunctional/parallel/UpdateContextCmd/no_changes (1.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 update-context --alsologtostderr -v=2: (1.110994s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (1.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.97s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.97s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.99s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.99s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.88s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4370455s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.88s)

TestFunctional/parallel/ProfileCmd/profile_list (1.8s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.4697633s)
functional_test.go:1311: Took "1.4697633s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "325.6101ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.80s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image save gcr.io/google-containers/addon-resizer:functional-686300 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image save gcr.io/google-containers/addon-resizer:functional-686300 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.0129836s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.01s)

TestFunctional/parallel/ProfileCmd/profile_json_output (2.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.509559s)
functional_test.go:1362: Took "1.5098949s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "764.2385ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.27s)
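
A sketch for consuming the JSON form from PowerShell, assuming the usual top-level "valid"/"invalid" arrays and a "Name" field on each profile (the payload itself is not shown in this log):

    $profiles = out/minikube-windows-amd64.exe profile list -o json | Out-String | ConvertFrom-Json
    $profiles.valid | ForEach-Object { $_.Name }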

TestFunctional/parallel/ImageCommands/ImageRemove (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image rm gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image rm gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr: (1.236132s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image ls: (1.5052436s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.74s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (10.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.3036138s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (10.15s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-686300
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-686300 image save --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-686300 image save --daemon gcr.io/google-containers/addon-resizer:functional-686300 --alsologtostderr: (6.7353597s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-686300
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.14s)

TestFunctional/delete_addon-resizer_images (0.45s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-686300
--- PASS: TestFunctional/delete_addon-resizer_images (0.45s)

TestFunctional/delete_my-image_image (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-686300
--- PASS: TestFunctional/delete_my-image_image (0.17s)

TestFunctional/delete_minikube_cached_images (0.17s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-686300
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

TestImageBuild/serial/Setup (66.03s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-120000 --driver=docker
E0229 18:01:58.479522    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-120000 --driver=docker: (1m6.0287081s)
--- PASS: TestImageBuild/serial/Setup (66.03s)

TestImageBuild/serial/NormalBuild (3.88s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-120000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-120000: (3.8829231s)
--- PASS: TestImageBuild/serial/NormalBuild (3.88s)

TestImageBuild/serial/BuildWithBuildArg (2.65s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-120000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-120000: (2.6503433s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.65s)
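
--build-opt=build-arg=ENV_A=test_env_str forwards a build-arg through to the in-cluster docker build. The testdata/image-build/test-arg Dockerfile is not shown in this log; a minimal hypothetical Dockerfile that the flag would exercise:

    FROM gcr.io/k8s-minikube/busybox
    ARG ENV_A
    RUN echo "ENV_A=$ENV_A"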

TestImageBuild/serial/BuildWithDockerIgnore (2.17s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-120000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-120000: (2.1709401s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (2.17s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-120000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-120000: (2.7546142s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.75s)

TestJSONOutput/start/Command (79.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-514600 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-514600 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m19.5399641s)
--- PASS: TestJSONOutput/start/Command (79.54s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-514600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-514600 --output=json --user=testUser: (1.5957331s)
--- PASS: TestJSONOutput/pause/Command (1.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-514600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-514600 --output=json --user=testUser: (1.4205091s)
--- PASS: TestJSONOutput/unpause/Command (1.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.28s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-514600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-514600 --output=json --user=testUser: (7.2761285s)
--- PASS: TestJSONOutput/stop/Command (7.28s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.35s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-305400 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-305400 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (267.7178ms)

-- stdout --
	{"specversion":"1.0","id":"167dd717-91c6-48be-8d51-6b393f57b150","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-305400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab1484d5-611e-404a-b320-565963676f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"e1df962a-d6d7-4dcb-8f06-2900e27c8299","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1fd1196c-0ee1-4cd0-86e6-1949550aed3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"cbb24d03-2b4f-45a1-bc67-dc0526d5cf7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"469d3dce-58d0-4cf2-9bdd-daaa398fa785","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6f2e04a-6441-4ee7-87f8-a19582427afc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0229 18:13:55.890629   10564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-305400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-305400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-305400: (1.0788692s)
--- PASS: TestErrorJSONOutput (1.35s)
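
Each stdout line of --output=json is a CloudEvents-style envelope with the payload under .data; the error event above carries exitcode, name, and message. A PowerShell sketch for extracting error events from such a run (one JSON object per line, so per-line ConvertFrom-Json is safe):

    out/minikube-windows-amd64.exe start -p json-output-error-305400 --output=json --driver=fail |
        ForEach-Object { $_ | ConvertFrom-Json } |
        Where-Object { $_.type -eq "io.k8s.sigs.minikube.error" } |
        ForEach-Object { "$($_.data.exitcode) $($_.data.name): $($_.data.message)" }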

TestKicCustomNetwork/create_custom_network (76.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-627700 --network=
E0229 18:14:41.115245    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-627700 --network=: (1m11.7795873s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-627700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-627700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-627700: (4.7627519s)
--- PASS: TestKicCustomNetwork/create_custom_network (76.71s)

TestKicCustomNetwork/use_default_bridge_network (75.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-122000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-122000 --network=bridge: (1m10.8231057s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-122000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-122000: (4.3200166s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (75.32s)

TestKicExistingNetwork (76.78s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-793400 --network=existing-network
E0229 18:16:58.490350    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-793400 --network=existing-network: (1m11.3485228s)
helpers_test.go:175: Cleaning up "existing-network-793400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-793400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-793400: (4.3203912s)
--- PASS: TestKicExistingNetwork (76.78s)

TestKicCustomSubnet (75.53s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-282900 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-282900 --subnet=192.168.60.0/24: (1m10.9735372s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-282900 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-282900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-282900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-282900: (4.3840396s)
--- PASS: TestKicCustomSubnet (75.53s)
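
The assertion here is simply that the subnet requested at start shows up in the network's IPAM config; both commands are verbatim from the run above, with the expected output noted:

    out/minikube-windows-amd64.exe start -p custom-subnet-282900 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-282900 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected: 192.168.60.0/24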

TestKicStaticIP (78.59s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-675200 --static-ip=192.168.200.200
E0229 18:19:41.121962    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-675200 --static-ip=192.168.200.200: (1m12.9152861s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-675200 ip
helpers_test.go:175: Cleaning up "static-ip-675200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-675200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-675200: (5.0596798s)
--- PASS: TestKicStaticIP (78.59s)

TestMainNoArgs (0.22s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

TestMinikubeProfile (141.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-561300 --driver=docker
E0229 18:21:04.319361    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-561300 --driver=docker: (1m5.3138379s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-561300 --driver=docker
E0229 18:21:58.489349    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-561300 --driver=docker: (1m0.3902069s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-561300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.9961851s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-561300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.991978s)
helpers_test.go:175: Cleaning up "second-561300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-561300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-561300: (5.2611357s)
helpers_test.go:175: Cleaning up "first-561300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-561300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-561300: (5.5667874s)
--- PASS: TestMinikubeProfile (141.38s)

TestMountStart/serial/StartWithMountFirst (19.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-281400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-281400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (18.0217166s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.04s)

TestMountStart/serial/VerifyMountFirst (1.08s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-281400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-281400 ssh -- ls /minikube-host: (1.0739199s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.08s)

TestMountStart/serial/StartWithMountSecond (18.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-281400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-281400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (17.3382164s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.34s)

TestMountStart/serial/VerifyMountSecond (1.05s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-281400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-281400 ssh -- ls /minikube-host: (1.0539424s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.05s)

TestMountStart/serial/DeleteFirst (3.84s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-281400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-281400 --alsologtostderr -v=5: (3.840082s)
--- PASS: TestMountStart/serial/DeleteFirst (3.84s)

TestMountStart/serial/VerifyMountPostDelete (1.08s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-281400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-281400 ssh -- ls /minikube-host: (1.0775812s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.08s)

TestMountStart/serial/Stop (2.45s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-281400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-281400: (2.449446s)
--- PASS: TestMountStart/serial/Stop (2.45s)

TestMountStart/serial/RestartStopped (13s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-281400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-281400: (11.9870869s)
--- PASS: TestMountStart/serial/RestartStopped (13.00s)

TestMountStart/serial/VerifyMountPostStop (1.05s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-281400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-281400 ssh -- ls /minikube-host: (1.0460241s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.05s)

TestMultiNode/serial/FreshStart2Nodes (146.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-678200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0229 18:24:41.128018    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 18:25:01.712276    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-678200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m23.8372407s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr: (2.2768993s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (146.11s)

TestMultiNode/serial/DeployApp2Nodes (26.06s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- rollout status deployment/busybox: (18.9780418s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- nslookup kubernetes.io: (1.9033542s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-lvrdk -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-lvrdk -- nslookup kubernetes.io: (1.5674946s)
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-lvrdk -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-lvrdk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (26.06s)
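
The deploy step is an ordinary kubectl workflow driven through minikube's bundled kubectl. The two jsonpath queries collect the pod IPs and pod names so that the exec/nslookup checks that follow can target each of the two busybox replicas, one per node. Condensed, with the manifest path as in the log:

    out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- rollout status deployment/busybox
    out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- get pods -o jsonpath='{.items[*].status.podIP}'
    out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- get pods -o jsonpath='{.items[*].metadata.name}'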

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-lvrdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-lvrdk -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.36s)
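
The host-ping check extracts the host's address from inside each pod and pings it once. The pipeline assumes busybox's nslookup prints the resolved address on its fifth output line: awk 'NR==5' selects that line, and cut -d' ' -f3 takes its third space-separated field, which here is 192.168.65.254, the Docker Desktop host gateway. Verbatim for one of the two pods:

    out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-windows-amd64.exe kubectl -p multinode-678200 -- exec busybox-5b5d89c9d6-89jqg -- sh -c "ping -c 1 192.168.65.254"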

                                                
                                    
TestMultiNode/serial/AddNode (52.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-678200 -v 3 --alsologtostderr
E0229 18:26:58.501427    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-678200 -v 3 --alsologtostderr: (49.8885245s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr: (2.8786591s)
--- PASS: TestMultiNode/serial/AddNode (52.77s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-678200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)
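
The labels check talks to the cluster with plain kubectl against the test's kubeconfig context. The jsonpath {range .items[*]}...{end} loop prints every node's full label map on one line, which the harness then matches against the labels minikube is expected to set:

    kubectl --context multinode-678200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"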

                                                
                                    
TestMultiNode/serial/ProfileList (1.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.2456974s)
--- PASS: TestMultiNode/serial/ProfileList (1.25s)

                                                
                                    
TestMultiNode/serial/CopyFile (39.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 status --output json --alsologtostderr: (2.7000106s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200:/home/docker/cp-test.txt: (1.1083092s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt": (1.2003863s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3295504677\001\cp-test_multinode-678200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3295504677\001\cp-test_multinode-678200.txt: (1.1197307s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt": (1.1170886s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt multinode-678200-m02:/home/docker/cp-test_multinode-678200_multinode-678200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt multinode-678200-m02:/home/docker/cp-test_multinode-678200_multinode-678200-m02.txt: (1.5827789s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt": (1.0802182s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test_multinode-678200_multinode-678200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test_multinode-678200_multinode-678200-m02.txt": (1.0968177s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt multinode-678200-m03:/home/docker/cp-test_multinode-678200_multinode-678200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt multinode-678200-m03:/home/docker/cp-test_multinode-678200_multinode-678200-m03.txt: (1.6479985s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test.txt": (1.0943827s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test_multinode-678200_multinode-678200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test_multinode-678200_multinode-678200-m03.txt": (1.1072968s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200-m02:/home/docker/cp-test.txt: (1.1479954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt": (1.1025228s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3295504677\001\cp-test_multinode-678200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3295504677\001\cp-test_multinode-678200-m02.txt: (1.1154931s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt": (1.1021868s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m02:/home/docker/cp-test.txt multinode-678200:/home/docker/cp-test_multinode-678200-m02_multinode-678200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m02:/home/docker/cp-test.txt multinode-678200:/home/docker/cp-test_multinode-678200-m02_multinode-678200.txt: (1.6375723s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt": (1.1124466s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test_multinode-678200-m02_multinode-678200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test_multinode-678200-m02_multinode-678200.txt": (1.0988969s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m02:/home/docker/cp-test.txt multinode-678200-m03:/home/docker/cp-test_multinode-678200-m02_multinode-678200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m02:/home/docker/cp-test.txt multinode-678200-m03:/home/docker/cp-test_multinode-678200-m02_multinode-678200-m03.txt: (1.5864728s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test.txt": (1.1532677s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test_multinode-678200-m02_multinode-678200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test_multinode-678200-m02_multinode-678200-m03.txt": (1.0886612s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200-m03:/home/docker/cp-test.txt: (1.1353526s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt": (1.1108846s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3295504677\001\cp-test_multinode-678200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3295504677\001\cp-test_multinode-678200-m03.txt: (1.0968034s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt": (1.1116048s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m03:/home/docker/cp-test.txt multinode-678200:/home/docker/cp-test_multinode-678200-m03_multinode-678200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m03:/home/docker/cp-test.txt multinode-678200:/home/docker/cp-test_multinode-678200-m03_multinode-678200.txt: (1.6138896s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt": (1.0648456s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test_multinode-678200-m03_multinode-678200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200 "sudo cat /home/docker/cp-test_multinode-678200-m03_multinode-678200.txt": (1.1105951s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m03:/home/docker/cp-test.txt multinode-678200-m02:/home/docker/cp-test_multinode-678200-m03_multinode-678200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200-m03:/home/docker/cp-test.txt multinode-678200-m02:/home/docker/cp-test_multinode-678200-m03_multinode-678200-m02.txt: (1.6539244s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m03 "sudo cat /home/docker/cp-test.txt": (1.1472113s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test_multinode-678200-m03_multinode-678200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test_multinode-678200-m03_multinode-678200-m02.txt": (1.1108789s)
--- PASS: TestMultiNode/serial/CopyFile (39.17s)
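
The long copy matrix above is three patterns repeated for every node pair: host-to-node, node-to-host, and node-to-node, each followed by an ssh -n <node> "sudo cat ..." read-back of the destination file. One round of the pattern, with paths exactly as logged:

    # host -> node
    out/minikube-windows-amd64.exe -p multinode-678200 cp testdata\cp-test.txt multinode-678200:/home/docker/cp-test.txt
    # node -> node (the target is renamed so source and destination stay distinguishable)
    out/minikube-windows-amd64.exe -p multinode-678200 cp multinode-678200:/home/docker/cp-test.txt multinode-678200-m02:/home/docker/cp-test_multinode-678200_multinode-678200-m02.txt
    # read back on the destination node
    out/minikube-windows-amd64.exe -p multinode-678200 ssh -n multinode-678200-m02 "sudo cat /home/docker/cp-test_multinode-678200_multinode-678200-m02.txt"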

                                                
                                    
TestMultiNode/serial/StopNode (6.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 node stop m03: (2.1640569s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-678200 status: exit status 7 (2.1362203s)

                                                
                                                
-- stdout --
	multinode-678200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-678200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-678200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:28:17.790786    9996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr: exit status 7 (2.1394047s)

                                                
                                                
-- stdout --
	multinode-678200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-678200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-678200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:28:19.929995    8004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:28:20.010304    8004 out.go:291] Setting OutFile to fd 1272 ...
	I0229 18:28:20.011336    8004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:28:20.011336    8004 out.go:304] Setting ErrFile to fd 1436...
	I0229 18:28:20.011336    8004 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:28:20.026846    8004 out.go:298] Setting JSON to false
	I0229 18:28:20.026846    8004 mustload.go:65] Loading cluster: multinode-678200
	I0229 18:28:20.026846    8004 notify.go:220] Checking for updates...
	I0229 18:28:20.028306    8004 config.go:182] Loaded profile config "multinode-678200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:28:20.028382    8004 status.go:255] checking status of multinode-678200 ...
	I0229 18:28:20.048763    8004 cli_runner.go:164] Run: docker container inspect multinode-678200 --format={{.State.Status}}
	I0229 18:28:20.212816    8004 status.go:330] multinode-678200 host status = "Running" (err=<nil>)
	I0229 18:28:20.213606    8004 host.go:66] Checking if "multinode-678200" exists ...
	I0229 18:28:20.222935    8004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-678200
	I0229 18:28:20.387883    8004 host.go:66] Checking if "multinode-678200" exists ...
	I0229 18:28:20.401089    8004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:28:20.407984    8004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-678200
	I0229 18:28:20.575431    8004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58126 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-678200\id_rsa Username:docker}
	I0229 18:28:20.708071    8004 ssh_runner.go:195] Run: systemctl --version
	I0229 18:28:20.733457    8004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:28:20.764706    8004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-678200
	I0229 18:28:20.930872    8004 kubeconfig.go:92] found "multinode-678200" server: "https://127.0.0.1:58125"
	I0229 18:28:20.930946    8004 api_server.go:166] Checking apiserver status ...
	I0229 18:28:20.945628    8004 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:28:20.978426    8004 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2446/cgroup
	I0229 18:28:20.998460    8004 api_server.go:182] apiserver freezer: "21:freezer:/docker/fd3e9b232bd027272f92a991998ec9ab4b62abee68f0363aa5c1ffccc6c15975/kubepods/burstable/pod18c391ab6e21988746462bba4b09d21a/07f39d81b4e2fd8c3cb694c27783b12cd3fc0a33f9492234e83beb5986c13705"
	I0229 18:28:21.009740    8004 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fd3e9b232bd027272f92a991998ec9ab4b62abee68f0363aa5c1ffccc6c15975/kubepods/burstable/pod18c391ab6e21988746462bba4b09d21a/07f39d81b4e2fd8c3cb694c27783b12cd3fc0a33f9492234e83beb5986c13705/freezer.state
	I0229 18:28:21.029797    8004 api_server.go:204] freezer state: "THAWED"
	I0229 18:28:21.029903    8004 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58125/healthz ...
	I0229 18:28:21.045813    8004 api_server.go:279] https://127.0.0.1:58125/healthz returned 200:
	ok
	I0229 18:28:21.045813    8004 status.go:421] multinode-678200 apiserver status = Running (err=<nil>)
	I0229 18:28:21.045813    8004 status.go:257] multinode-678200 status: &{Name:multinode-678200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:28:21.045813    8004 status.go:255] checking status of multinode-678200-m02 ...
	I0229 18:28:21.060833    8004 cli_runner.go:164] Run: docker container inspect multinode-678200-m02 --format={{.State.Status}}
	I0229 18:28:21.225193    8004 status.go:330] multinode-678200-m02 host status = "Running" (err=<nil>)
	I0229 18:28:21.225193    8004 host.go:66] Checking if "multinode-678200-m02" exists ...
	I0229 18:28:21.234674    8004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-678200-m02
	I0229 18:28:21.397020    8004 host.go:66] Checking if "multinode-678200-m02" exists ...
	I0229 18:28:21.410778    8004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:28:21.418052    8004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-678200-m02
	I0229 18:28:21.583935    8004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58170 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-678200-m02\id_rsa Username:docker}
	I0229 18:28:21.727072    8004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:28:21.750974    8004 status.go:257] multinode-678200-m02 status: &{Name:multinode-678200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:28:21.751072    8004 status.go:255] checking status of multinode-678200-m03 ...
	I0229 18:28:21.768172    8004 cli_runner.go:164] Run: docker container inspect multinode-678200-m03 --format={{.State.Status}}
	I0229 18:28:21.926707    8004 status.go:330] multinode-678200-m03 host status = "Stopped" (err=<nil>)
	I0229 18:28:21.926707    8004 status.go:343] host is not running, skipping remaining checks
	I0229 18:28:21.926707    8004 status.go:257] multinode-678200-m03 status: &{Name:multinode-678200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.44s)
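
The two "Non-zero exit ... exit status 7" results above are the expected outcome, not failures: status exits non-zero whenever any node reports Stopped, and the harness accepts exit 7 here (compare the explicit "status error: exit status 7 (may be ok)" note in TestScheduledStopWindows below) while asserting on the per-node stdout instead. Reduced to its core:

    out/minikube-windows-amd64.exe -p multinode-678200 node stop m03
    out/minikube-windows-amd64.exe -p multinode-678200 status   # exit 7 expected: m03 shows host/kubelet Stopped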

                                                
                                    
TestMultiNode/serial/StartAfterStop (23.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 node start m03 --alsologtostderr: (20.6533879s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 status: (2.7425342s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (23.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (142.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-678200
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-678200
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-678200: (25.707189s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-678200 --wait=true -v=8 --alsologtostderr
E0229 18:29:41.125694    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-678200 --wait=true -v=8 --alsologtostderr: (1m56.1339547s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-678200
--- PASS: TestMultiNode/serial/RestartKeepsNodes (142.31s)

                                                
                                    
TestMultiNode/serial/DeleteNode (11.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 node delete m03: (8.4905899s)
multinode_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr
multinode_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr: (2.0498052s)
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (11.15s)
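
The readiness assertion at the end uses a go-template rather than jsonpath: it walks each node's conditions and prints the status of the Ready condition, one value per node, so after deleting m03 it should emit True for exactly the two remaining nodes. Verbatim:

    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"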

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 stop
multinode_test.go:342: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 stop: (24.1091726s)
multinode_test.go:348: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-678200 status: exit status 7 (564.1524ms)

                                                
                                                
-- stdout --
	multinode-678200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-678200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:31:43.453841    3044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr: exit status 7 (588.4711ms)

                                                
                                                
-- stdout --
	multinode-678200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-678200-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:31:44.032544   10296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:31:44.120207   10296 out.go:291] Setting OutFile to fd 1300 ...
	I0229 18:31:44.120543   10296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:31:44.120543   10296 out.go:304] Setting ErrFile to fd 1396...
	I0229 18:31:44.120543   10296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:31:44.132774   10296 out.go:298] Setting JSON to false
	I0229 18:31:44.132774   10296 mustload.go:65] Loading cluster: multinode-678200
	I0229 18:31:44.132774   10296 notify.go:220] Checking for updates...
	I0229 18:31:44.133433   10296 config.go:182] Loaded profile config "multinode-678200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:31:44.133433   10296 status.go:255] checking status of multinode-678200 ...
	I0229 18:31:44.150098   10296 cli_runner.go:164] Run: docker container inspect multinode-678200 --format={{.State.Status}}
	I0229 18:31:44.319606   10296 status.go:330] multinode-678200 host status = "Stopped" (err=<nil>)
	I0229 18:31:44.319606   10296 status.go:343] host is not running, skipping remaining checks
	I0229 18:31:44.319606   10296 status.go:257] multinode-678200 status: &{Name:multinode-678200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:31:44.319606   10296 status.go:255] checking status of multinode-678200-m02 ...
	I0229 18:31:44.340611   10296 cli_runner.go:164] Run: docker container inspect multinode-678200-m02 --format={{.State.Status}}
	I0229 18:31:44.491525   10296 status.go:330] multinode-678200-m02 host status = "Stopped" (err=<nil>)
	I0229 18:31:44.491723   10296 status.go:343] host is not running, skipping remaining checks
	I0229 18:31:44.491723   10296 status.go:257] multinode-678200-m02 status: &{Name:multinode-678200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.26s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (105.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-678200 --wait=true -v=8 --alsologtostderr --driver=docker
E0229 18:31:58.504953    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-678200 --wait=true -v=8 --alsologtostderr --driver=docker: (1m43.0203429s)
multinode_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr
multinode_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-678200 status --alsologtostderr: (2.0068974s)
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (105.70s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (71.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-678200
multinode_test.go:480: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-678200-m02 --driver=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-678200-m02 --driver=docker: exit status 14 (264.7376ms)

                                                
                                                
-- stdout --
	* [multinode-678200-m02] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:33:30.551241    5600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-678200-m02' is duplicated with machine name 'multinode-678200-m02' in profile 'multinode-678200'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-678200-m03 --driver=docker
multinode_test.go:488: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-678200-m03 --driver=docker: (1m4.7596655s)
multinode_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-678200
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-678200: exit status 80 (1.1562329s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-678200
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:34:35.585820   12540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-678200-m03 already exists in multinode-678200-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_f30df829a49c27e09829ed66f8254940e71c1eac_14.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-678200-m03
E0229 18:34:41.128914    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
multinode_test.go:500: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-678200-m03: (5.3673638s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (71.78s)
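
The two expected failures here exercise different guards: reusing a machine name from an existing profile is rejected up front with exit 14 (MK_USAGE), while node add fails with exit 80 (GUEST_NODE_ADD) because the next generated node name, multinode-678200-m03, is already taken by the standalone profile created in between. Condensed:

    out/minikube-windows-amd64.exe start -p multinode-678200-m02 --driver=docker   # exit 14: duplicates a machine name in profile multinode-678200
    out/minikube-windows-amd64.exe start -p multinode-678200-m03 --driver=docker   # succeeds: the name is free
    out/minikube-windows-amd64.exe node add -p multinode-678200                    # exit 80: generated name m03 now collides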

                                                
                                    
TestPreload (202.06s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-294500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-294500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m2.5607011s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-294500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-294500 image pull gcr.io/k8s-minikube/busybox: (2.0249156s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-294500
E0229 18:36:58.498005    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-294500: (12.4960849s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-294500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
E0229 18:37:44.331997    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-294500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (59.0102258s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-294500 image list
helpers_test.go:175: Cleaning up "test-preload-294500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-294500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-294500: (5.1916348s)
--- PASS: TestPreload (202.06s)
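
The apparent purpose of TestPreload is to confirm that an image pulled by hand survives a restart that is allowed to use preloaded image tarballs: start with --preload=false on an older Kubernetes, pull busybox, stop, restart with defaults, then check image list. Trimmed to the flags that matter:

    out/minikube-windows-amd64.exe start -p test-preload-294500 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker
    out/minikube-windows-amd64.exe -p test-preload-294500 image pull gcr.io/k8s-minikube/busybox
    out/minikube-windows-amd64.exe stop -p test-preload-294500
    out/minikube-windows-amd64.exe start -p test-preload-294500 --memory=2200 --wait=true --driver=docker
    out/minikube-windows-amd64.exe -p test-preload-294500 image list   # busybox should still be listed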

                                                
                                    
TestScheduledStopWindows (136.26s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-024800 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-024800 --memory=2048 --driver=docker: (1m6.3599119s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-024800 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-024800 --schedule 5m: (1.3166956s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-024800 -n scheduled-stop-024800
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-024800 -n scheduled-stop-024800: (1.2461947s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-024800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-024800 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.1342275s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-024800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-024800 --schedule 5s: (1.3463433s)
E0229 18:39:41.136794    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-024800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-024800: exit status 7 (403.7492ms)

                                                
                                                
-- stdout --
	scheduled-stop-024800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:40:25.545361    7396 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-024800 -n scheduled-stop-024800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-024800 -n scheduled-stop-024800: exit status 7 (400.6268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:40:25.942367    9584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-024800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-024800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-024800: (4.0432352s)
--- PASS: TestScheduledStopWindows (136.26s)
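
stop --schedule arms a countdown instead of stopping immediately, and the test reads the pending stop back two ways: {{.TimeToStop}} in status output on the host, and the minikube-scheduled-stop systemd unit inside the node. Scheduling again with a shorter duration replaces the pending stop; once it fires, status drops to exit 7, which the harness explicitly notes "may be ok". The sequence, minus the polling:

    out/minikube-windows-amd64.exe stop -p scheduled-stop-024800 --schedule 5m
    out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-024800 -n scheduled-stop-024800
    out/minikube-windows-amd64.exe ssh -p scheduled-stop-024800 -- sudo systemctl show minikube-scheduled-stop --no-page
    out/minikube-windows-amd64.exe stop -p scheduled-stop-024800 --schedule 5s   # replaces the 5m schedule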

                                                
                                    
TestInsufficientStorage (49.05s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-942600 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-942600 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (42.5016671s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"aecf4d14-3b6c-411d-beee-a0316b3d15b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-942600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f85cc80-42ec-4015-b815-5fac6c99ab7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"68368ff1-bb3d-42e3-bfdf-7680b1f41735","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b1ed8aba-8d03-426e-908b-9d11b8d1f6fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"59819af4-f4e3-4386-9b2a-7b66184f5476","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"d54873b3-141a-4d96-a38d-475084aa13ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ee72d794-550f-43da-b455-cd05888c36fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c4c121e7-8b51-44e4-8086-74622a110263","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2e7f8d90-afd6-4a90-8293-661048394c6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"775e4eb3-0d09-495e-ab39-105013c0c6ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"a420e38c-8aa7-4b4b-9566-a3ebee94e3f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-942600 in cluster insufficient-storage-942600","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"211cbb5a-43d7-43e1-b6d4-d396c5b3cd84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a20bf6c1-6080-4199-a2d7-cc5530b1228e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f6ea0c0-af4a-4d64-a8dc-21a24703f9dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:40:30.392578    4832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-942600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-942600 --output=json --layout=cluster: exit status 7 (1.1470362s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-942600","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-942600","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:41:12.908963    3832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 18:41:13.878175    3832 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-942600" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-942600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-942600 --output=json --layout=cluster: exit status 7 (1.1513846s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-942600","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-942600","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:41:14.054380    7672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 18:41:15.032085    7672 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-942600" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	E0229 18:41:15.066166    7672 status.go:559] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\insufficient-storage-942600\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-942600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-942600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-942600: (4.2423463s)
--- PASS: TestInsufficientStorage (49.05s)
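
With --output=json, start emits one CloudEvents-style JSON record per step instead of the glyph-prefixed lines seen elsewhere in this report. The MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 values visible in the stream look like harness-injected overrides simulating a nearly full /var, so the run fails fast with exit 26 (RSRC_DOCKER_STORAGE) and status --layout=cluster reports the HTTP-style code 507, InsufficientStorage:

    out/minikube-windows-amd64.exe start -p insufficient-storage-942600 --memory=2048 --output=json --wait=true --driver=docker   # exit 26
    out/minikube-windows-amd64.exe status -p insufficient-storage-942600 --output=json --layout=cluster                           # StatusCode 507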

                                                
                                    
TestRunningBinaryUpgrade (376.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.13622871.exe start -p running-upgrade-130400 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.13622871.exe start -p running-upgrade-130400 --memory=2200 --vm-driver=docker: (4m2.2727983s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-130400 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-130400 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m6.1957276s)
helpers_test.go:175: Cleaning up "running-upgrade-130400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-130400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-130400: (7.0715412s)
--- PASS: TestRunningBinaryUpgrade (376.42s)

TestMissingContainerUpgrade (308.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3055433074.exe start -p missing-upgrade-251000 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3055433074.exe start -p missing-upgrade-251000 --memory=2200 --driver=docker: (2m2.6934304s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-251000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-251000: (15.9213831s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-251000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-251000 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-251000 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m42.6439677s)
helpers_test.go:175: Cleaning up "missing-upgrade-251000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-251000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-251000: (5.6642036s)
--- PASS: TestMissingContainerUpgrade (308.23s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (321.5248ms)

-- stdout --
	* [NoKubernetes-130400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0229 18:41:19.454650    3584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)
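
Note: this subtest passes precisely because minikube rejects the flag combination. A minimal reproduction of the guard, followed by the remedy the error text itself suggests (standalone invocations for illustration, not the harness commands):

    # exits 14 (MK_USAGE): --kubernetes-version cannot be combined with --no-kubernetes
    minikube start -p NoKubernetes-130400 --no-kubernetes --kubernetes-version=1.20 --driver=docker
    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-130400 --no-kubernetes --driver=docker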

TestNoKubernetes/serial/StartWithK8s (120.08s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --driver=docker: (1m58.6113514s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-130400 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-130400 status -o json: (1.472936s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (120.08s)

TestStoppedBinaryUpgrade/Upgrade (361.18s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3845240787.exe start -p stopped-upgrade-130400 --memory=2200 --vm-driver=docker
E0229 18:41:41.730245    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 18:41:58.500807    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3845240787.exe start -p stopped-upgrade-130400 --memory=2200 --vm-driver=docker: (4m1.6375527s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3845240787.exe -p stopped-upgrade-130400 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.26.0.3845240787.exe -p stopped-upgrade-130400 stop: (13.8459774s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-130400 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-130400 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m45.6943226s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (361.18s)
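
Note: the stopped-binary upgrade path boils down to three steps: provision with the old release, stop it with that same binary, then restart the profile with the build under test. A sketch of the sequence, with the temp-dir binary name shortened for readability:

    minikube-v1.26.0.exe start -p stopped-upgrade-130400 --memory=2200 --vm-driver=docker
    minikube-v1.26.0.exe -p stopped-upgrade-130400 stop
    out/minikube-windows-amd64.exe start -p stopped-upgrade-130400 --memory=2200 --alsologtostderr -v=1 --driver=docker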

TestNoKubernetes/serial/StartWithStopK8s (69.54s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --no-kubernetes --driver=docker: (1m1.6955545s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-130400 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-130400 status -o json: exit status 2 (1.7263661s)

-- stdout --
	{"Name":"NoKubernetes-130400","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
** stderr ** 
	W0229 18:44:21.616115   11064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-130400
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-130400: (6.1197232s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (69.54s)

TestNoKubernetes/serial/Start (34.13s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --no-kubernetes --driver=docker: (34.1311164s)
--- PASS: TestNoKubernetes/serial/Start (34.13s)

TestPause/serial/Start (127.8s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-465700 --memory=2048 --install-addons=false --wait=all --driver=docker
E0229 18:44:41.126412    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-465700 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m7.8003853s)
--- PASS: TestPause/serial/Start (127.80s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.66s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-130400 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-130400 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6632342s)

** stderr ** 
	W0229 18:45:03.574214    5752 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.66s)
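
Note: systemctl is-active exits 0 only when the unit is active (3 means inactive), so the non-zero exit asserted above is the passing outcome for a profile started with --no-kubernetes. The same probe run by hand:

    out/minikube-windows-amd64.exe ssh -p NoKubernetes-130400 "sudo systemctl is-active --quiet service kubelet"
    # expected: non-zero exit (kubelet not running); exit 0 would mean Kubernetes is up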

TestNoKubernetes/serial/ProfileList (14.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.0475628s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (5.1697528s)
--- PASS: TestNoKubernetes/serial/ProfileList (14.22s)

TestNoKubernetes/serial/Stop (2.99s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-130400
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-130400: (2.985103s)
--- PASS: TestNoKubernetes/serial/Stop (2.99s)

TestNoKubernetes/serial/StartNoArgs (16.72s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-130400 --driver=docker: (16.7237627s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (16.72s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-130400 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-130400 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.4430733s)

** stderr ** 
	W0229 18:45:39.160750    9392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.45s)

TestPause/serial/SecondStartNoReconfiguration (51.88s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-465700 --alsologtostderr -v=1 --driver=docker
E0229 18:46:58.512872    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-465700 --alsologtostderr -v=1 --driver=docker: (51.841676s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.88s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.55s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-130400
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-130400: (3.5518681s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.55s)

TestPause/serial/Pause (2.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-465700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-465700 --alsologtostderr -v=5: (2.6993614s)
--- PASS: TestPause/serial/Pause (2.70s)

TestPause/serial/VerifyStatus (1.97s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-465700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-465700 --output=json --layout=cluster: exit status 2 (1.966562s)

-- stdout --
	{"Name":"pause-465700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-465700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0229 18:47:35.711973    1000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (1.97s)
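
Note: a paused profile reports the apiserver as 418 ("Paused") while the node itself stays 200, and minikube status deliberately exits 2 in that state, which the harness treats as "may be ok". The lifecycle exercised across this group of subtests, in order:

    out/minikube-windows-amd64.exe pause -p pause-465700 --alsologtostderr -v=5
    out/minikube-windows-amd64.exe status -p pause-465700 --output=json --layout=cluster   # exit status 2 while paused
    out/minikube-windows-amd64.exe unpause -p pause-465700 --alsologtostderr -v=5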

TestPause/serial/Unpause (2.1s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-465700 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-465700 --alsologtostderr -v=5: (2.1006399s)
--- PASS: TestPause/serial/Unpause (2.10s)

TestPause/serial/PauseAgain (3.83s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-465700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-465700 --alsologtostderr -v=5: (3.8251428s)
--- PASS: TestPause/serial/PauseAgain (3.83s)

TestPause/serial/DeletePaused (10s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-465700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-465700 --alsologtostderr -v=5: (10.0007615s)
--- PASS: TestPause/serial/DeletePaused (10.00s)

TestPause/serial/VerifyDeletedResources (4.69s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (4.0223632s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-465700
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-465700: exit status 1 (187.8605ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-465700: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.69s)
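
Note: deletion is verified negatively: after minikube delete, the profile's Docker artifacts must no longer resolve. The manual equivalent of the checks above:

    out/minikube-windows-amd64.exe delete -p pause-465700 --alsologtostderr -v=5
    docker ps -a                          # the profile container should be gone
    docker volume inspect pause-465700    # expected: exit 1, "no such volume"
    docker network ls                     # the profile network should be absent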

TestStartStop/group/no-preload/serial/FirstStart (121.28s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-500400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-500400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2: (2m1.2795498s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (121.28s)

TestStartStop/group/no-preload/serial/DeployApp (8.71s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-500400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba7e2a9f-5d5e-46bb-9361-ff2bd4b7d150] Pending
helpers_test.go:344: "busybox" [ba7e2a9f-5d5e-46bb-9361-ff2bd4b7d150] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0229 18:54:24.354015    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
helpers_test.go:344: "busybox" [ba7e2a9f-5d5e-46bb-9361-ff2bd4b7d150] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.0178367s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-500400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.71s)
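
Note: the deploy check creates the busybox pod, waits up to 8m0s for pods labelled integration-test=busybox to run, then execs a trivial command to prove the container is usable. Outside the harness, kubectl wait (not what the test uses internally) approximates the polling step:

    kubectl --context no-preload-500400 create -f testdata\busybox.yaml
    kubectl --context no-preload-500400 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-500400 exec busybox -- /bin/sh -c "ulimit -n"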

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-500400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-500400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.095093s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-500400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)

TestStartStop/group/no-preload/serial/Stop (12.58s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-500400 --alsologtostderr -v=3
E0229 18:54:41.130263    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-500400 --alsologtostderr -v=3: (12.5769325s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.58s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-500400 -n no-preload-500400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-500400 -n no-preload-500400: exit status 7 (494.866ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 18:54:44.885831    1928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-500400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.21s)
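
Note: single status fields can be pulled out with Go templates, which is how these subtests distinguish a cleanly stopped host (exit status 7, output "Stopped") from other failures; the harness accepts 7 as "may be ok" and proceeds to enable the addon while stopped:

    out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-500400 -n no-preload-500400
    # prints "Stopped" and exits 7 when the profile is stopped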

TestStartStop/group/no-preload/serial/SecondStart (363.46s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-500400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-500400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.29.0-rc.2: (6m1.4685112s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-500400 -n no-preload-500400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-500400 -n no-preload-500400: (1.9098502s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (363.46s)

TestStartStop/group/embed-certs/serial/FirstStart (84.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-058300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-058300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4: (1m24.4719156s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.47s)

TestStartStop/group/embed-certs/serial/DeployApp (8.73s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-058300 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f146b10e-15a6-44e6-a8f5-6e855321f5a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f146b10e-15a6-44e6-a8f5-6e855321f5a8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0168959s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-058300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.73s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-058300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-058300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2535871s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-058300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.55s)

TestStartStop/group/embed-certs/serial/Stop (12.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-058300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-058300 --alsologtostderr -v=3: (12.2722005s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.05s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-058300 -n embed-certs-058300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-058300 -n embed-certs-058300: exit status 7 (426.3077ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 18:56:42.027766   12412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-058300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.05s)

TestStartStop/group/embed-certs/serial/SecondStart (357.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-058300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4
E0229 18:56:58.511665    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-058300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.28.4: (5m55.6433872s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-058300 -n embed-certs-058300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-058300 -n embed-certs-058300: (2.3382892s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (357.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-653000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4
E0229 18:59:41.157894    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-653000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4: (1m29.0327287s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (27.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9lx5s" [46dc8c8d-d811-4d69-9333-1d306494371b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9lx5s" [46dc8c8d-d811-4d69-9333-1d306494371b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 27.0195899s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (27.03s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [30f782c6-686d-477e-9e2d-3daec40d517c] Pending
helpers_test.go:344: "busybox" [30f782c6-686d-477e-9e2d-3daec40d517c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [30f782c6-686d-477e-9e2d-3daec40d517c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0214996s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-653000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-653000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.2279144s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-653000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.57s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-653000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-653000 --alsologtostderr -v=3: (13.5064367s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.51s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.84s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9lx5s" [46dc8c8d-d811-4d69-9333-1d306494371b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.3855236s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-500400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.84s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-500400 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.88s)

TestStartStop/group/no-preload/serial/Pause (10.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-500400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-500400 --alsologtostderr -v=1: (1.8483206s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-500400 -n no-preload-500400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-500400 -n no-preload-500400: exit status 2 (1.3441503s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0229 19:01:25.140653    9316 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-500400 -n no-preload-500400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-500400 -n no-preload-500400: exit status 2 (1.491442s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:01:26.500035    8824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-500400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-500400 --alsologtostderr -v=1: (1.9639166s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-500400 -n no-preload-500400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-500400 -n no-preload-500400: (2.0864335s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-500400 -n no-preload-500400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-500400 -n no-preload-500400: (2.0576227s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (10.80s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000: exit status 7 (548.1181ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:01:28.353139   11976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-653000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (361.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-653000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-653000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.28.4: (5m59.5624271s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000: (1.7295835s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (361.29s)

TestStartStop/group/newest-cni/serial/FirstStart (83.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-705400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2
E0229 19:01:58.525025    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-705400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2: (1m23.2691534s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (83.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (44.51s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wwjx2" [384c2501-30b8-4f16-bd9e-8eb5232ce076] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wwjx2" [384c2501-30b8-4f16-bd9e-8eb5232ce076] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 44.5117431s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (44.51s)

TestStartStop/group/old-k8s-version/serial/Stop (4.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-718400 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-718400 --alsologtostderr -v=3: (4.4284846s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-718400 -n old-k8s-version-718400: exit status 7 (595.398ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:02:46.117095    5688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-718400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.42s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-705400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-705400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.2749548s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.27s)

TestStartStop/group/newest-cni/serial/Stop (9.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-705400 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-705400 --alsologtostderr -v=3: (9.7338659s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.73s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-705400 -n newest-cni-705400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-705400 -n newest-cni-705400: exit status 7 (505.7591ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:03:18.901488    2252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-705400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.15s)

TestStartStop/group/newest-cni/serial/SecondStart (61.57s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-705400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-705400 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.29.0-rc.2: (1m0.2291849s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-705400 -n newest-cni-705400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-705400 -n newest-cni-705400: (1.3435648s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (61.57s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.49s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wwjx2" [384c2501-30b8-4f16-bd9e-8eb5232ce076] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0166528s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-058300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.49s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.94s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-058300 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.94s)

TestStartStop/group/embed-certs/serial/Pause (9.45s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-058300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-058300 --alsologtostderr -v=1: (1.899713s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-058300 -n embed-certs-058300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-058300 -n embed-certs-058300: exit status 2 (1.3552101s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0229 19:03:34.934438    4196 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-058300 -n embed-certs-058300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-058300 -n embed-certs-058300: exit status 2 (1.3581832s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:03:36.277856   14348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-058300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-058300 --alsologtostderr -v=1: (1.8349405s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-058300 -n embed-certs-058300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-058300 -n embed-certs-058300: (1.5540602s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-058300 -n embed-certs-058300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-058300 -n embed-certs-058300: (1.4483968s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (9.45s)

TestNetworkPlugins/group/auto/Start (100.66s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m40.6636541s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.66s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.89s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-705400 image list --format=json
E0229 19:04:21.521629    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:21.536043    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:21.551260    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:21.583258    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:21.631962    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:21.723639    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:21.893335    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:22.221339    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.89s)

TestStartStop/group/newest-cni/serial/Pause (10.52s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-705400 --alsologtostderr -v=1
E0229 19:04:22.862748    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:04:24.154637    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-705400 --alsologtostderr -v=1: (1.8370706s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-705400 -n newest-cni-705400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-705400 -n newest-cni-705400: exit status 2 (1.3464229s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0229 19:04:24.319005    6216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-705400 -n newest-cni-705400
E0229 19:04:26.716110    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-705400 -n newest-cni-705400: exit status 2 (1.3434898s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:04:25.680877   12136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-705400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-705400 --alsologtostderr -v=1: (2.4614809s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-705400 -n newest-cni-705400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-705400 -n newest-cni-705400: (2.0180176s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-705400 -n newest-cni-705400
E0229 19:04:31.842953    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-705400 -n newest-cni-705400: (1.5157321s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (10.52s)

TestNetworkPlugins/group/kindnet/Start (103.89s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E0229 19:04:41.145702    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 19:04:42.095885    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:05:02.590993    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m43.8944072s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (103.89s)

TestNetworkPlugins/group/auto/KubeletFlags (1.15s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-652900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-652900 "pgrep -a kubelet": (1.1496985s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.15s)

TestNetworkPlugins/group/auto/NetCatPod (16.71s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t5j84" [f40f444f-a574-464f-a9ed-8661113cb2c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t5j84" [f40f444f-a574-464f-a9ed-8661113cb2c5] Running
E0229 19:05:43.554300    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.0270775s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.71s)

TestNetworkPlugins/group/auto/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.35s)

TestNetworkPlugins/group/auto/Localhost (0.33s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.33s)

TestNetworkPlugins/group/auto/HairPin (0.32s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-62z6l" [930960ed-7072-44e9-b80f-45b62f91280a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0165473s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-652900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-652900 "pgrep -a kubelet": (1.2907794s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (19.67s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xllfm" [3306b279-16c7-4599-a57a-95f169e862e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xllfm" [3306b279-16c7-4599-a57a-95f169e862e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.012358s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (19.67s)

TestNetworkPlugins/group/kindnet/DNS (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.39s)

TestNetworkPlugins/group/calico/Start (184.07s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (3m4.0725458s)
--- PASS: TestNetworkPlugins/group/calico/Start (184.07s)

TestNetworkPlugins/group/kindnet/Localhost (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.35s)

TestNetworkPlugins/group/kindnet/HairPin (0.34s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (24.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ljtv8" [80603d4e-1064-4e7e-88a5-4b460fc256fa] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ljtv8" [80603d4e-1064-4e7e-88a5-4b460fc256fa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.0217858s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (24.30s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ljtv8" [80603d4e-1064-4e7e-88a5-4b460fc256fa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0384986s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-653000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.73s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-653000 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-653000 image list --format=json: (1.1541729s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (11.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-653000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-653000 --alsologtostderr -v=1: (2.8801604s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000: exit status 2 (1.7171698s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0229 19:08:05.002896   10152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000: exit status 2 (1.494663s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:08:06.739242    9488 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-653000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-653000 --alsologtostderr -v=1: (2.1208767s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000: (2.0623966s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-653000 -n default-k8s-diff-port-653000: (1.5876006s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (11.87s)

TestNetworkPlugins/group/custom-flannel/Start (118.01s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m58.0106765s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (118.01s)

TestNetworkPlugins/group/false/Start (97.1s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E0229 19:09:21.523042    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
E0229 19:09:41.144903    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-686300\client.crt: The system cannot find the path specified.
E0229 19:09:49.344139    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-500400\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m37.0973848s)
--- PASS: TestNetworkPlugins/group/false/Start (97.10s)

TestNetworkPlugins/group/calico/ControllerPod (6.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dddsm" [1be45426-f50a-42dc-b795-5e121a2dff57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0264989s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.03s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-652900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-652900 "pgrep -a kubelet": (1.3362598s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.34s)

TestNetworkPlugins/group/false/KubeletFlags (1.52s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-652900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-652900 "pgrep -a kubelet": (1.5176743s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.52s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (25.91s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n66s5" [49a44599-8e15-4553-8850-a0e02450dde7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-n66s5" [49a44599-8e15-4553-8850-a0e02450dde7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 25.0178552s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (25.91s)

TestNetworkPlugins/group/calico/KubeletFlags (1.68s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-652900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-652900 "pgrep -a kubelet": (1.6751268s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.68s)

TestNetworkPlugins/group/false/NetCatPod (25.92s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2kjl9" [85121f8c-63e1-4bbb-8ee0-bf560779889b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2kjl9" [85121f8c-63e1-4bbb-8ee0-bf560779889b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 25.0116141s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (25.92s)

TestNetworkPlugins/group/calico/NetCatPod (23.82s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x5j2v" [e7838097-0594-4ef0-88df-cd9f119e41f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x5j2v" [e7838097-0594-4ef0-88df-cd9f119e41f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 23.0205931s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (23.82s)

TestNetworkPlugins/group/calico/DNS (0.34s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

TestNetworkPlugins/group/calico/Localhost (0.35s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.40s)

TestNetworkPlugins/group/calico/HairPin (0.45s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.45s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.37s)

TestNetworkPlugins/group/false/DNS (0.48s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.48s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.49s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.49s)

TestNetworkPlugins/group/false/Localhost (0.44s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.44s)

TestNetworkPlugins/group/false/HairPin (0.43s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.43s)

TestNetworkPlugins/group/enable-default-cni/Start (107.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
E0229 19:11:45.098447    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-652900\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m47.14283s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (107.14s)

TestNetworkPlugins/group/bridge/Start (113.58s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E0229 19:11:55.542107    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-652900\client.crt: The system cannot find the path specified.
E0229 19:11:58.519380    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-850800\client.crt: The system cannot find the path specified.
E0229 19:12:05.599006    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-652900\client.crt: The system cannot find the path specified.
E0229 19:12:23.602970    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-653000\client.crt: The system cannot find the path specified.
E0229 19:12:46.564611    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-652900\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m53.5805264s)
--- PASS: TestNetworkPlugins/group/bridge/Start (113.58s)

TestNetworkPlugins/group/kubenet/Start (130.81s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E0229 19:13:17.466825    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-652900\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-652900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m10.8050805s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (130.81s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-652900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-652900 "pgrep -a kubelet": (1.1829062s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.18s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (21.69s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hzslv" [243da435-0348-434f-b288-62c129d5da5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hzslv" [243da435-0348-434f-b288-62c129d5da5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 21.0219679s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (21.69s)

TestNetworkPlugins/group/bridge/KubeletFlags (1.25s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-652900 "pgrep -a kubelet"
E0229 19:13:45.527664    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-diff-port-653000\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-652900 "pgrep -a kubelet": (1.2516733s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.25s)

TestNetworkPlugins/group/bridge/NetCatPod (17.66s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k4mfn" [bc39cfa8-e001-4252-a285-af0737881939] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k4mfn" [bc39cfa8-e001-4252-a285-af0737881939] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 17.0196521s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.66s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.32s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.34s)

TestNetworkPlugins/group/bridge/DNS (0.35s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.35s)

TestNetworkPlugins/group/bridge/Localhost (0.33s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.33s)

TestNetworkPlugins/group/bridge/HairPin (0.32s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.32s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.14s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-652900 "pgrep -a kubelet"
E0229 19:15:23.943214    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-652900\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-652900 "pgrep -a kubelet": (1.137647s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.14s)

TestNetworkPlugins/group/kubenet/NetCatPod (17.65s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-652900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xhsns" [64e08c84-679b-4fde-adec-b0fa456faf26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 19:15:33.492901    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\auto-652900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-xhsns" [64e08c84-679b-4fde-adec-b0fa456faf26] Running
E0229 19:15:36.944041    5660 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-652900\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 17.0186763s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (17.65s)

TestNetworkPlugins/group/kubenet/DNS (0.33s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-652900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.33s)

TestNetworkPlugins/group/kubenet/Localhost (0.3s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.29s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-652900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.29s)

                                                
                                    

Test skip (27/321)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)
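Note: all six DownloadOnly subtests above skip because a preload tarball for the requested Kubernetes version is already on disk, and the tarball bundles both the images and the binaries, so there is nothing left to download or cache. On a clean machine the download path can still be exercised with minikube's --download-only flag (a sketch; the profile name here is hypothetical):
out/minikube-windows-amd64.exe start -p download-test --download-only --driver=docker --kubernetes-version=v1.28.4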

                                                
                                    
TestAddons/parallel/Registry (38.21s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 32.0531ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6x5gt" [692d212b-5fb5-4ff9-bbf8-8fce07396446] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0279071s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8jkqx" [d97b0f11-431c-49db-903a-9e6a156d39eb] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018779s
addons_test.go:340: (dbg) Run:  kubectl --context addons-850800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-850800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-850800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (27.9114687s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (38.21s)
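Note: the in-cluster probe above (wget --spider against registry.kube-system.svc.cluster.local) passed; the remaining steps skip because they assume the host can reach the cluster network directly, which does not hold for the port-forwarded Docker driver on Windows. The same probe can be re-run manually (a sketch, reusing the command from this run):
kubectl --context addons-850800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"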

                                                
                                    
TestAddons/parallel/Ingress (32.86s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-850800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-850800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:232: (dbg) Done: kubectl --context addons-850800 replace --force -f testdata\nginx-ingress-v1.yaml: (1.0268975s)
addons_test.go:245: (dbg) Run:  kubectl --context addons-850800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:245: (dbg) Done: kubectl --context addons-850800 replace --force -f testdata\nginx-pod-svc.yaml: (1.0608468s)
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fb3a6a49-20d5-4ba7-b855-1c6c93f3ff73] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fb3a6a49-20d5-4ba7-b855-1c6c93f3ff73] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 29.0082514s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-850800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-850800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (1.3804392s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-850800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0229 17:47:47.556269    5856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (32.86s)
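Note: the W0229 stderr line above is the same Docker CLI context warning that recurs throughout this report: the client config still names a "default" context whose meta.json file has been deleted. One way to inspect and reset the active context, using standard Docker CLI commands (a sketch, not taken from this run):
docker context ls
docker context use default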

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.03s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-686300 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-686300 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8736: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)
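Note: issue #8303 tracks 9p mounts being broken on Windows. The command this test would otherwise exercise is minikube mount, roughly (a sketch; the host path and mount target are hypothetical):
out/minikube-windows-amd64.exe mount C:\Users\jenkins.minikube7\mount-src:/mount-9p -p functional-686300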

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (38.59s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-686300 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-686300 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-hfkmd" [65a83b5e-419f-463b-8423-4ac1d81182e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-hfkmd" [65a83b5e-419f-463b-8423-4ac1d81182e1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 38.026767s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (38.59s)
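Note: the deployment itself became healthy; the skip is because on port-forwarded drivers (Docker on Windows) the NodePort opened by kubectl expose is not reachable from the host, which is what issue #7383 covers. The supported path on such drivers is the service tunnel (a sketch):
out/minikube-windows-amd64.exe service hello-node-connect --url -p functional-686300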

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
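Note: issue #8304 covers direct access to tunnelled LoadBalancer services being broken on Windows. The tunnel whose routes this subtest would exercise is started with (a sketch):
out/minikube-windows-amd64.exe tunnel -p functional-686300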

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (1.23s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-238600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-238600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-238600: (1.2227087s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.23s)

                                                
                                    
TestNetworkPlugins/group/cilium (20.33s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-652900 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-652900
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-652900
>>> host: /etc/nsswitch.conf:
W0229 18:47:37.907085     720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/hosts:
W0229 18:47:38.290975    5464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/resolv.conf:
W0229 18:47:38.645356   13960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-652900
>>> host: crictl pods:
W0229 18:47:39.208515    9132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: crictl containers:
W0229 18:47:39.542627    8612 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> k8s: describe netcat deployment:
error: context "cilium-652900" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-652900" does not exist
>>> k8s: netcat logs:
error: context "cilium-652900" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-652900" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-652900" does not exist
>>> k8s: coredns logs:
error: context "cilium-652900" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-652900" does not exist
>>> k8s: api server logs:
error: context "cilium-652900" does not exist
>>> host: /etc/cni:
W0229 18:47:41.594112    6340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: ip a s:
W0229 18:47:41.895497    9328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: ip r s:
W0229 18:47:42.245203    6412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: iptables-save:
W0229 18:47:42.548836   11372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: iptables table nat:
W0229 18:47:43.005248    8608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-652900
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-652900
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-652900" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-652900" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-652900
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-652900
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-652900" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-652900" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-652900" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-652900" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-652900" does not exist
>>> host: kubelet daemon status:
W0229 18:47:45.496413    1876 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: kubelet daemon config:
W0229 18:47:45.823383    3364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> k8s: kubelet logs:
W0229 18:47:46.989437   10892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/kubernetes/kubelet.conf:
W0229 18:47:48.068697    9276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /var/lib/kubelet/config.yaml:
W0229 18:47:48.607915    3724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:47:42 GMT
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://127.0.0.1:59240
  name: missing-upgrade-251000
- cluster:
    certificate-authority: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:47:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://127.0.0.1:59095
  name: pause-465700
contexts:
- context:
    cluster: missing-upgrade-251000
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:47:42 GMT
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-251000
  name: missing-upgrade-251000
- context:
    cluster: pause-465700
    extensions:
    - extension:
        last-update: Thu, 29 Feb 2024 18:47:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-465700
  name: pause-465700
current-context: missing-upgrade-251000
kind: Config
preferences: {}
users:
- name: missing-upgrade-251000
  user:
    client-certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\missing-upgrade-251000\client.crt
    client-key: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\missing-upgrade-251000\client.key
- name: pause-465700
  user:
    client-certificate: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-465700\client.crt
    client-key: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\pause-465700\client.key
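Note: this kubeconfig is the direct cause of every context error above: only missing-upgrade-251000 and pause-465700 exist, current-context points at missing-upgrade-251000, and no cilium-652900 entry was ever written because that cluster was never started. Had the profile existed, the debug commands' context could be selected with (a sketch):
kubectl config use-context cilium-652900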

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-652900
>>> host: docker daemon status:
W0229 18:47:49.290515    5224 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: docker daemon config:
W0229 18:47:49.583424     780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/docker/daemon.json:
W0229 18:47:49.884339    5432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: docker system info:
W0229 18:47:50.391629    4536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: cri-docker daemon status:
W0229 18:47:50.664283    9480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: cri-docker daemon config:
W0229 18:47:51.128654   10552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0229 18:47:51.451246    6548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /usr/lib/systemd/system/cri-docker.service:
W0229 18:47:51.720775    2180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: cri-dockerd version:
W0229 18:47:51.980395   10500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: containerd daemon status:
W0229 18:47:52.298778    3208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: containerd daemon config:
W0229 18:47:52.599626    2500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /lib/systemd/system/containerd.service:
W0229 18:47:52.856505   11300 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/containerd/config.toml:
W0229 18:47:53.154050    6828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: containerd config dump:
W0229 18:47:53.465161   11544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: crio daemon status:
W0229 18:47:53.750186   13460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: crio daemon config:
W0229 18:47:54.051226    1160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: /etc/crio:
W0229 18:47:54.359069    8060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
>>> host: crio config:
W0229 18:47:54.628442   10528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube7\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-652900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-652900"
----------------------- debugLogs end: cilium-652900 [took: 19.0379683s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-652900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-652900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-652900: (1.2916754s)
--- SKIP: TestNetworkPlugins/group/cilium (20.33s)